Patent abstract:
The invention relates to a device for visualizing the inside of the mouth of a patient (13), the display device comprising a penetrating-ray emitter (2) adapted to take a view of an inner part (14, 22, 23, 27, 28, 31, 32, 33, 34, 35, 36, 37, 50, 51, 62, 63) located under an outer surface of an organ disposed in the mouth. According to the invention, the device comprises a pair of augmented reality glasses (1) having, on the one hand, an optical glass through which a user (6) of the pair of glasses (1) can see the inside of the mouth and, on the other hand, a viewing camera adapted to capture the image that the user (6) sees through the optical glass, a central unit being adapted to correlate first images, corresponding to those taken by the viewing camera, with second images, corresponding to those taken by the penetrating-ray emitter (2).
Publication number: FR3032282A1
Application number: FR1550817
Filing date: 2015-02-03
Publication date: 2016-08-05
Inventor: Francois Duret
Applicant: Francois Duret
IPC main class:
Patent description:

[0001] The present invention relates to a device for visualizing the inside of a mouth, in particular a device making it possible to see an internal part located under an external surface of an organ disposed in the mouth. Known visualization devices in the field of dental treatment of invasive sub-gingival, coronal, root or bone parts include the following: a penetrating-ray emitter, such as a radiology apparatus working in 2D (conventional radiography or radiovisiography), in 2D 1/2 or in 3D (CT scanner, cone beam, panoramic or orthopantomograph), an MRI apparatus, an ultrasonic apparatus, an apparatus working with terahertz radiation, or an apparatus working with techniques derived from holographic interferometry (OCT); possibly an endoscopic camera for taking impressions by optical means (radiation ranging from deep blue to X-rays, or even ultrasound), with or without a structured projected light; and a remote display screen making it possible to see, on the one hand, the modeling resulting from the penetrating-ray apparatus and, on the other hand, the modeling obtained after the scan carried out with the intraoral camera, displayed once the clinician has taken the optical impression. X-ray machines used in dental treatment can be divided into two broad categories: those close to the dental unit and those located elsewhere. In the first category we find devices using silver-film, phosphor-plate or digital media (radiovisiography, or RVG).
[0002] Although silver-film systems are used less and less, the other two are increasingly used because they make it possible to digitize, indirectly (phosphor plates) or directly (RVG), the pixelated radiological image obtained from the bone tissues. In both cases the resulting image is digitized in grayscale and displayed in 2D, on a screen close to the practitioner, in black and white or in reconstituted virtual colors. This image allows him to know the sub-gingival state of the bone tissue, but also of the crowns and roots of the teeth. The clinician intuitively matches the forms seen on the 2D screen to the visible parts in the mouth of his patient. This gives him a very rough idea of the shape and length of a root and of possible pathological images, and lets him imagine the position of nerves and large blood vessels. If he also wants to follow over time whether or not his treatment is effective, he has to take several successive images. With the advent of more demanding dentistry, particularly for periodontal and implantology treatments, more complex apparatuses representing a second category have come into use. These devices are rarely in the dental office, but they allow the dentist to have a general view of the entire mouth in 2D, 2D 1/2 or even 3D if magnetic resonance (MRI) is used. In this category we have found, for the last thirty years, the oral scanners (orthopantomographs, panoramics) giving 2D images of the whole arch in a single shot; the tomodensitometers giving 2D 1/2 images that, thanks to the different voxel slices, reconstitute a false 3D image (CT scanner); and, more recently, the cone beam, combining the advantages of the conventional scanner and of the CT scanner and giving a very fast and much more accurate 2D 1/2 image of the bone tissues. These latter images are widely used in implantology, where the practitioner must know exactly the position of the underlying organs, such as the sinuses and the various bone structures, when preparing the site that will receive his future implant. In all cases, these 2D 1/2 (or false 3D) spatial images are represented on a remote 2D screen, which allows them to be moved in the three planes of space so as to locate the areas of interest or the risk areas. Some practitioners, finally, use real 3D images from MRI, but this is still rare and very expensive; in this case, again, the visualization is done on a remote monitor. Recently, and given the imprecision of the radiological image (100-200 microns), some clinicians have decided to associate with it a much more precise image of the external part (10 to 30 microns) obtained using an endoscopic optical impression camera. By fusing the first image with the second, they obtain on the remote 2D screen a merged view of the underlying tissues and organs and of the optical impression of the teeth and the gingiva. Unfortunately, while knowing the proximity of an underlying organ to within a hundred microns is acceptable, the same is not true for the accuracy of a crown or an implant shaft, which must be known to about ten microns. If they use the systems described above for the sub-gingival view, they therefore also need an optical impression camera to have a sufficiently accurate external view. Today, stemming directly from the work of the inventor François Duret, there exist different types of intraoral impression-taking methods whose results can be fused with a radiological image.
Among them we find: - those projecting onto the tooth a structured light, which can be a point, a line or even a complete grid. They have been widely known for several decades and are very well described in the article by G. Hausler et al., "Light sectioning with large depth and high resolution", Appl. Opt. 27 (1988). They can use, for example, projections of variable-pitch grids ("digital stereo camera", SPIE vol. 283, 3-D, 1981); the profilometric phase principle (Duret, US 5,092,022 and US 4,952,149), of which the CEREC (Sirona GmbH) is the best-known example; the combination of fringe projection and phase variation of Hint-Els (USA); or the parallel confocal principle such as the Itero (US 0109559) from Cadent (USA). - Those that do not use active or structured light projection but stereoscopy: this is the case of the 3M Lava AWS camera (Rohaly et al., US 7,372,642) or the Condor camera of Duret and V&O Querbes (US 8,520,925). While all these works and inventions have led to many achievements and more than twenty commercially available systems (F. Duret, Floss No 63, May 2011, "the great adventure of CAD/CAM at the IDS of Cologne", 14-26), none has proposed an original solution making it possible to visualize the impression of the visible and invisible parts directly in the mouth during and after the work. All the methods described, implemented in dental offices or elsewhere for large radiology devices, use the same visualization system: a remote screen, near or away from the operator. Whatever the complexity of these devices, every camera or radiology apparatus that we have just described is associated with a screen. It can sit on top of a cart, be connected to or dependent on a computer (all-in-one), or be all or part of a laptop or tablet. It may be a computer monitor (video, plasma, LCD or LED). The screen is specific to the application, radiological or resulting from the optical impression. Sometimes it combines the two methods (Planmeca, Carestream) by displaying in two separate windows the video view from the camera and the modeled view from the digital radiological and/or intraoral processing.
[0003] On the same screen, the practitioner can interactively display and complete the information relating to the patient: the medical characteristics and the care to be done or already done. This is called the patient record. In this case, it is not a problem to display this information on a remote screen, insofar as the elements contained in this record are rarely completed during the procedures and do not need to be viewed during them. Even if this has already led to augmented reality applications, it is, in our view, of little interest for the health of the patient. The same cannot be said for the display of his physiological data during the intervention, as we will see in the accessory applications of our invention. The digital central processing unit collects and processes information from the intraoral camera and the X-ray devices and displays it on the display screens. We immediately understand the first problem that the operator encounters: he must look at one or more remote screens to see the radiological view and the view from his intraoral camera. If he uses a film medium, he has no choice but to use a light box. This forces him to look away, and he never has a precise correspondence between his clinical space, that is to say what he sees in his patient's mouth, and the sub-gingival area known radiologically and displayed on the monitor. We understand why the clinician must constantly shift his gaze from his operative field to the remote image. Moreover, if augmented reality indications are given to him on the remote screen, he must make the effort not only to shift his gaze from his operative field to the monitor, but also to transpose mentally and virtually these indications and this information, visible on the 2D remote screen, back into the operative field, with the risk of the transfer being inaccurate or badly done. This is extremely unreliable, especially since the only region corresponding to a common volume between the visible part and the sub-gingival part, allowing a mental correlation, is displayed in 2D in the radiological view on the screen, whereas in the mouth his vision is three-dimensional. The operation is so imprecise in implantology that clinicians need to use tooth-fixed guides to prevent their drills from damaging the underlying tissue. It is easy to understand that seeing the course and the result of one's work only indirectly is dangerous for the patient, imprecise, incomplete and extremely damaging in daily practice. We can summarize the problems caused by this display mode on a remote screen as follows: - It forces the practitioner to move constantly between the part of the body on which he is working and the remote screen. Indeed, if the practitioner wishes to follow the evolution of his endodontic or surgical work, he must leave the view of the body area on which he works and look at his video or digital screen (monitor) to guess where his work is located. - This movement can lead to harmful, imprecise and uncontrolled movements of his hands during his work, a problem all the more important if he works for a long time (fatigue). - This movement is dangerous, because his eyes regularly leave the operative field, at the risk of causing an injury in the patient's mouth or body or of fracturing his instruments. It is also very tiring, because the existence of a remote screen forces ocular gymnastics at a very high rate: there can be more than 20 back-and-forth eye movements per minute.
- It excludes any complementary information directly correlated to the visualized field, as augmented reality allows today. Having no correlation, on a remote screen, between the real view and the information resulting from augmented reality rules out any real-time and any precise information in the field of operation. Even if this information appears on the remote screen, the visualization will never be in real time, nor will the clinician's gesture be precisely positioned in the field of work.
[0004] This action is imprecise: while it is possible to see the underlying tissues on a remote screen, the direct visualization of the work is never secure, because moving the eyes and changing the clinical action area during the work makes it difficult to correlate the two observations. No true correlation between this RX representation and the field of work exists, because of the use of the remote screen. The same holds for any information from augmented reality software shown on the remote screen.
[0005] This operation is insufficient: the RX radiation produces a 2D or 2D 1/2 visualization carried over onto a 2D screen, which makes it particularly difficult, if not impossible, to relate what has been radiographed to what actually presents itself to the operator, who sees in 3D.
[0006] This medical procedure is not secure: it can be said that no simple and, above all, secure solution has been found to meet the clinician's need. For his gesture to be secure, he must see the area that has been X-rayed and the area where he works merged in real time in the same reference frame. This is the prerequisite for working safely, quickly, in total comfort and with the precision required for this type of intervention. The present invention aims to overcome the drawbacks mentioned above by proposing a new display device.
[0007] The invention relates to a device for visualizing the inside of a patient's mouth, the display device comprising a penetrating-ray emitter adapted to take a view of an internal part situated under an external surface of an organ disposed in the mouth, characterized in that it comprises a pair of augmented reality glasses having, on the one hand, an optical glass through which a user of the pair of glasses can see the inside of the mouth and, on the other hand, a viewing camera adapted to capture as an image what the user sees through the optical glass, a central unit being adapted to correlate first images, corresponding to those taken by the viewing camera, with second images, corresponding to those taken by the penetrating-ray emitter. According to a first embodiment, the central unit is adapted to orient the second images according to the orientation of the pair of augmented reality glasses. According to a second embodiment, the central unit is adapted to project onto the optical glass the correlation of the first images with the second images. According to a feature of the second embodiment, the central unit is adapted to project onto the optical glass, on the user's command, images of a selection of anatomical components of the organ taken by the penetrating-ray emitter. According to a third embodiment, the display device comprises a medical treatment instrument comprising, on the one hand, a tool adapted to treat the anatomical components of an organ with which it is in contact and, on the other hand, a marker adapted to be located spatially during the treatment of the anatomical components, the central unit being adapted to know the dimensions of the tool and the distance separating the tool from the marker, and to determine the position of the tool in the organ during treatment. According to a first feature of the third embodiment, the central unit is adapted to produce third images representing the tool used for the treatment, to correlate them with the second images, and to project the correlation so as to allow the visualization of the tool in the organ being treated. According to a second feature of the third embodiment, the length of the displacement of the tool being equal to the length of the displacement of the marker, the central unit is adapted to determine the direction and sense of movement of the tool relative to the anatomical components with which it is in contact, the direction and sense of movement of the tool being equal to those of the marker if the tool is non-deformable with respect to these anatomical components, or determined by the relief of these anatomical components if the tool is deformable with respect to them. According to a third feature of the third embodiment, the central unit is adapted to determine the ideal movement of the tool used for carrying out a treatment. According to an advantageous embodiment of the third feature of the third embodiment, the central unit is adapted to guide the user so that the tool used follows the ideal movement. According to a first preferential mode of this advantageous embodiment, the guidance of the user is achieved by displaying the ideal movement on the optical glass, correlated with the second images.
According to a second preferential mode of this advantageous embodiment, the guidance of the user is achieved by the emission of a sound signal depending on the position of the tool used. According to a fourth feature of the third embodiment, the tool used is identified by an identifier, and the central unit is adapted to receive the identifier and determine the corresponding tool. According to an advantageous embodiment of the fourth feature of the third embodiment, the central unit comprises a library of identifiers, each identifier corresponding to a tool forming part of the display device. According to a fourth embodiment, the display device comprises an optical impression camera adapted to take an optical impression of an external surface of an organ disposed in the mouth, the central unit being adapted to correlate fourth images, corresponding to those taken by the optical impression camera, with the first images. According to a fifth embodiment, the correlation of the images made by the central unit is a superposition and/or a replacement of the images on the optical glass.
[0008] According to a sixth embodiment, the central unit is adapted, on the user's command, to modify the contrast and the transparency of the images that it processes. According to a seventh embodiment, the penetrating-ray emitter is adapted to digitally transmit the images it takes to the central unit. According to an eighth embodiment, the display device comprises a scanning device adapted to digitize non-digital images transmitted by the penetrating-ray emitter and to transmit the digitized images to the central unit. According to a ninth embodiment, the central unit is adapted to project onto the optical glass additional information relating to the patient. According to a first feature of the ninth embodiment, the additional information relating to the patient includes data to be respected for producing a dental prosthesis. According to a second feature of the ninth embodiment, the display device comprises at least one peripheral instrument connected to the central unit and adapted to capture additional information relating to the patient. According to an advantageous embodiment of the second feature of the ninth embodiment, one of the peripheral instruments makes it possible either to capture the static occlusion and the mandibular movements, or to capture the color of the teeth, or to capture the shape of the face, or to capture the physiological data of the patient. According to a tenth embodiment, the display device comprises a microphone adapted to pick up control commands from the user and transmit them to the central unit. According to an eleventh embodiment, the pair of augmented reality glasses comprises a spatial tracking instrument.
[0009] According to a twelfth embodiment, the display device comprises a lighting system adapted to illuminate the organ disposed in the mouth. According to a feature of the twelfth embodiment, the lighting system comprises light-emitting diodes whose wavelength is adapted to allow the identification of pathologies. According to a thirteenth embodiment, the central unit is adapted to project onto a remote screen images relating to the organ disposed in the mouth. According to a fourteenth embodiment, the central unit is adapted to control a numerically controlled machine for producing a prosthesis relating to the organ disposed in the mouth. Thus, the device according to the invention combines, in the same field, perfectly correlated or very close: the direct visualization, through the augmented reality glasses, of the operative area that the practitioner sees in the mouth or on the patient's face; the modeling obtained by radiography (RX, ultrasound, MRI or holographic interferometry - OCT), possibly supplemented with the modeling resulting from the optical impression taken by a very precise endoscopic camera; and all the complementary information that can help the surgical procedure, itself correlated in the same reference frame. By complementary information we mean, and this is only an example, the path followed by an instrument for canal or surgical treatment, or by implantology drills, normally invisible unless X-rays are used. This is extremely important, because it makes it possible to follow, in real time and without increasing exposure to X-rays, gestures in the mouth that are not visible through ordinary glasses. This invention thus fully addresses the problems set out above by providing a scalable, inexpensive solution, usable in any dental office in a simplified and patient-friendly form. In particular, it responds to the many problems mentioned above: - By this new and original organization, the practitioner can see through his augmented reality glasses, in the same field, that is to say in the mouth of his patient, (a) the part of the body that he is analyzing and on which he is working, (b) the subgingival and bone views obtained from X-ray, ultrasound, MRI or holographic interferometry (OCT...) and (c), if he wants more precision, the modeling that he obtains by optical impression with his three-dimensional intraoral reading camera, the three views being totally merged without the help of a remote screen. Indeed, if the practitioner wishes to follow the evolution of his surgical work (implantology, extractions...) or endodontics, he will see, by superimposition or in any other visualizable form, such as a variation of intensity, color or contrast (this is only given as an example), the supra-gingival surface (teeth and gums...) and the subgingival part (bone, nerves, vessels, sinuses...) without leaving the view of the area of the body on which he works and makes his diagnosis. He can thus control, in real time or deferred, the environment and the result of his supra- and subgingival act without looking away from his operative field. - Thanks to the correspondence of this information, he is no longer likely to make harmful and uncontrolled movements of his hands during his work, an advantage all the more important if he wants to monitor his actions constantly in areas inaccessible to the eye, without resorting to penetrating radiation (RX...).
- By removing the diversion of his eyes from his operative field, he will no longer risk causing a wound in the patient's mouth or body, because his actions, and the information attached to the result of his action or helping him carry it out, will be permanently visible in his work area. - By choosing to make a correlation between the real view and the invisible subgingival and bone view after processing of the information, it is possible to use any type of precise optical impression method, whether or not the impression results from a method using active structured light. It is also possible to use any type of penetrating radiation, such as X-rays, ultrasound, MRI or holographic interferometry (OCT...). This method of superimposition and/or substitution in augmented reality is totally independent of the type of reading adopted, as is the additional information from augmented reality. - Through the use of a central unit, it will be possible to memorize the record of all these acts, which is very important during expert assessments (implantology, temporal or post-operative semiology). - Through the absence of eye movements likely to involve intense ocular gymnastics at a very high rate, the operation will become very relaxing for the clinician. - Through the use of glasses able to display augmented reality, it will be possible to give information, in real time or deferred, at the discretion of the clinician, in the operating field. This includes any additional information directly correlated to the visualized field, as augmented reality now makes possible, but also information from additional sources such as telemedicine. - Thanks to the optional additional information of augmented reality, it also makes it possible: - To guide the operator on site by telemedicine, but also by expert system or personalized learning, if important areas are not treated correctly. - To show, specifically and on site, sub-gingival information about fragile or important surroundings. - To warn the clinician during the procedure if it is not perfectly executed: it is possible, for example, to indicate incomplete root treatments, insufficiently or incorrectly positioned implant housings, or incomplete extractions or curettage. - To display and allow visualization, on site, of the dynamic movements of the instruments used, or of the parts of the body under treatment, during the performance of difficult extractions, the placement of implants or the drilling of root canals. - To highlight in the mouth the distribution of dental tissue, for example the proximity of the pulp, during the preparation of cavities intended to receive a filling or a crown.
- To follow in the mouth, and in real time, the path followed by any instrument used by the clinician, increasing its effectiveness and avoiding accidents to the surroundings (veins, nerves...). - By the means implemented, the device is simple in its manufacture, which makes it particularly robust. This also makes it possible: - To significantly reduce the manufacturing price, and therefore the selling price, thanks to the democratization of the electronic components used, such as the new-generation Condor cameras, virtual reality glasses or LEDs. - To choose a wired or wireless connection, including at the camera level, completely freeing the clinician's gestures. - To have natural stereoscopic 3D rendering without having to use 3D screens, which are always expensive. Other objects and advantages of the present invention will become apparent from the following description, relating to an embodiment given by way of indicative and non-limiting example. The understanding of this description will be facilitated by the attached drawings, in which: - Figure 1 is a schematic representation of the entire device, comprising all the main elements necessary for its proper functioning as well as complementary but non-compulsory peripheral elements; - Figure 2 is an overall representation of the partly realized prototype, including the camera, the connector, the computer (here a laptop) and possibly a housing containing the processing cards; - Figure 3 represents a complete diagram of the essential elements of the device peculiar to the invention; - Figure 4 represents the different correlation steps between the visible and invisible parts allowing the creation of the complemented object based on their common parts, here the crowns of the teeth; - Figure 5 represents the different views of the complemented object observed by the clinician in the mouth of his patient through the augmented reality glasses as he moves his gaze; - Figure 6 represents the different planes of the complemented object observable by the clinician in the mouth when he makes use of the transparency function of the present invention; - Figure 7 represents the view of the complemented object in the application of the present invention during the production of prosthetic preparations; - Figure 8 represents the view of the complemented object observed in the mouth by the practitioner when he uses a memorized or recognizable instrument, deformable for canal treatment or non-deformable for drilling an implant site or for a surgical operation; and - Figure 9 is a diagram showing the various steps of the clinical manipulation for carrying out the present invention.
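For illustration only (such code is not part of the patent text), the transparency function mentioned for Figure 6 amounts to an adjustable blend of the digitized invisible layer over the visible view. A minimal sketch, assuming both layers are already registered and rasterized to the same image size; all names are hypothetical:

```python
import numpy as np

def blend_layers(visible: np.ndarray, invisible: np.ndarray,
                 transparency: float) -> np.ndarray:
    """Overlay the registered radiological layer on the visible view.

    visible, invisible: HxWx3 uint8 images already in the same frame.
    transparency: 0.0 shows only the visible view, 1.0 only the invisible one.
    """
    alpha = float(np.clip(transparency, 0.0, 1.0))
    out = (1.0 - alpha) * visible.astype(np.float32) \
          + alpha * invisible.astype(np.float32)
    return out.astype(np.uint8)

# Example: 30 % radiological overlay on the camera view.
cam = np.zeros((480, 640, 3), dtype=np.uint8)      # placeholder camera frame
rx = np.full((480, 640, 3), 200, dtype=np.uint8)   # placeholder RX rendering
merged = blend_layers(cam, rx, transparency=0.3)
```

Raising the transparency value makes the subgingival layer dominant, corresponding to the adjustable transparency index described later in the description.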
The present invention relates to a new device, in the dental field, for visualization and/or intraoral measurement directly at the work site, that is to say in the mouth of the patient, bringing together in the same three-dimensional reference frame, exactly or slightly offset: (a) the direct view of the teeth and gums in the patient's mouth through the augmented reality glasses; (b) one or more modelings from radiological, OCT and/or MRI impressions; (c) one or more references or modelings derived from the cameras present on the augmented reality glasses through which the practitioner views the patient's mouth; (d) possibly one or more references or modelings resulting from an optical impression, with or without structured light, made with an endoscopic camera, so that they complement and/or substitute for one another for mutual enrichment using the principle of augmented reality; and (e) possibly additional information associated with these, provided by other devices. The aim is to allow the clinician never to look away from his work site, especially during gingival, coronal, root-canal, surgical or bone treatments, in order to secure, facilitate and optimize his clinical act. This application is not, however, limiting, the device also being applicable to the monitoring of the entire clinical activity of the dental office. To do this, the device according to the invention makes it possible to visualize directly in the patient's mouth, through augmented reality glasses, and perfectly correlated, both the visible part and the invisible part of the mouth in the form of a single object, here called the complemented object. The visible part is the surface of the teeth, the gums, the tongue and the inside of the cheeks. It can be seen directly through the glasses, but it can also be seen in the form of a digitized view from the reading of one camera (by stereodynamics) or of several cameras located on the glasses or, more precisely, thanks to the digitization obtained by scanning with the endoscopic camera. The digitized view can be substituted for the direct vision, without this being obligatory, with the digitized parts ordered from the least precise to the most precise: the most precise (the intraoral camera) is substituted for the least accurate (the glasses cameras), which itself can replace the non-digitized direct view.
[0010] The invisible part comes from a reading done separately, before the therapeutic action, by peripheral-type devices capable of providing RX, MRI, terahertz or ultrasonic images of the invisible parts that lie under the teeth, under the gums or under the skin, such as bone, epithelial and connective tissues, vessels and nerves. These devices make it possible to know, memorize and digitize the invisible underlying anatomy in a static or dynamic way. So that these two volumes, the visible and the invisible part, form only one set, the central unit seeks the common parts and joins the two objects by relying on these common parts. These common parts may be anatomical objects, such as the crowns of the teeth, or added objects, such as registration wedges fixed in the mouth, for example on the teeth if any abnormal mobility is to be avoided. These wedges or anatomical landmarks also serve as references for tracking the movement of instruments made for this purpose. In certain cases it is even possible to digitize the invisible and visible parts at the same time; this is the case if an ultrasonic or terahertz device is used. To do this, the invention consists of 1) a real-time or delayed display device using augmented reality glasses, which may be associated with a three-dimensional spatial tracking system (accelerometer/gyroscope and at least one camera), whose function is to enable the practitioner not only to see his operative field in direct vision, but also to have punctual indications or external views, as do all glasses of this type for surgical assistance. This allows him to follow the normally visible progress of his work (for example endocanal treatments or surgical acts), that is to say the external part of the teeth and gums. It also allows him to superimpose correlated images, and this is the essential feature of the invention, coming from 2) a second device.
[0011] This second device is capable of providing RX, MRI, terahertz or ultrasonic images of the invisible parts that lie under the teeth and under the gums, such as bone, epithelial and connective tissues, vessels and nerves, and of knowing, memorizing and digitizing the invisible underlying anatomy. These two devices 1) and 2) depend on 3) a central unit whose function is to digitize the views from the cameras on the glasses and from the device, and to correlate them so as to bring them together in the same reference frame. The clinician thus sees in the mouth of his patient, through his augmented reality glasses, a single object resulting from the fusion of the view that he has naturally through his glasses with the different information, merged permanently, dynamically and in real or near-real time, from both the external elements of the teeth and the gums and the invisible elements. He therefore has in his field of vision, in the mouth of his patient, the visible part but also the invisible part lying under the gums and under the teeth.
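For illustration only, the joining of the two objects on their common parts described above is in essence a rigid registration problem. Below is a minimal sketch of one standard approach (the Kabsch/Procrustes least-squares solution), assuming corresponding landmark points, such as crown cusps or registration wedges, have already been paired in both clouds; this is not necessarily the exact method used in the actual device, and all names are hypothetical:

```python
import numpy as np

def rigid_registration(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst.

    src, dst: Nx3 arrays of paired landmarks (e.g., cusp tips seen both in
    the radiological volume and in the camera-derived cloud).
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                               # proper rotation, det = +1
    t = c_dst - R @ c_src
    return R, t

# Once R, t are known, the whole invisible cloud can be carried into the
# visible reference frame:  aligned = (R @ invisible_cloud.T).T + t
```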
[0012] This allows the user to follow his action without looking away and to know the consequence of his act in a part of the mouth normally inaccessible to his eyes. It should be noted that, in order to avoid permanent irradiation of the patient, the present device needs at least only an initial 3D image, and that it correlates it in real or near-real time as a function of the field of vision, variable according to the orientation of the practitioner's gaze, which is filmed by the cameras on the augmented reality glasses. To this device will preferentially be added 4) an endoscopic camera for accurate optical impression-taking, using coherent or non-coherent radiation, with or without projection of active and/or structured light, whose function is to make a very accurate reading of the shapes and colors of the anatomy present in the visible part of the patient's mouth, such as his teeth and/or his gums. This impression is correlated and fused by the central unit 3) with the preceding views, more particularly with the less accurate view from the cameras carried by the augmented reality glasses 1), and therefore also with the sub-epithelial images from the external device 2). This allows the clinician to have in his field of work an extremely precise view of the part resulting from the augmented reality processing. Possibly and preferentially will be added 5) a lighting system on the endoscopic camera or on the glasses, whose function is to optimize the diagnostic analysis, such as the detection, by special radiation, of carious zones on the hard tissues or of tumors on the soft tissues. To follow the movements in endodontics or surgery, it is enough to correlate the visualized instruments, known or calibrated, with the double visible-and-invisible view.
[0013] This method optimizes the therapeutic action of the clinician by significantly increasing the security required for his actions, while ensuring the structural integrity of the human body and providing micron-level precision. Above all, it makes it possible to free the practitioner totally from certain constraints compelling him to look at a remote screen, to correlate by himself different images of a visible and an invisible zone, and to stay close to his work unit. The invention comprises a hardware device and a software system.
[0014] The hardware device combines 1) a direct dental visualization system of the visible tissues associated with specific and miniaturized augmented reality, 2) a system for digitizing the underlying tissues invisible to the eye, 3) a central unit for analog/digital conversion, data management and correlation, 4) possibly an accurate three-dimensional intraoral reading camera, with or without structured light, 5) possibly diagnosis-specific intraoral illumination, and 6) calibrated and known instruments used in the visible and invisible fields.
[0015] To this end, the subject of the present invention is, more precisely, a device comprising a viewing/sensing system with augmented reality glasses, a device making it possible to digitize the parts invisible to the eye, a central unit, a precise intraoral scanning camera, a lighting system and accessory peripherals. The visualization/capture system with augmented reality glasses 1 makes it possible to see the zone of therapeutic action in direct vision while being able to correlate, and then add, if they have common connection areas, additional information invisible to the eye coming directly from independent devices, such as images from RX, terahertz or ultrasonic reading systems. This display and sensing system may consist, for example, and this is only a non-limiting example for the invention, of glasses such as "Google Glass", "Vuzix Smart Glass", Sony, "K-Glass" or "HoloLens". To these glasses are added one or more cameras making it possible to continuously record, in real time, by successive alignments or mappings, the modeling resulting from the reading of the subgingival peripherals onto what the clinician sees in the patient's mouth, using common references such as, and this is just one example, the crowns of the teeth or markers deliberately placed on their surfaces or on the gums. Optionally and advantageously, according to an additional feature of the device according to the invention and for financial reasons, the device can rely on a 2D visualization, the glasses then having the essential function of displaying additional information with an imprecise adjustment on the area worked in relief; the central unit is now able to correlate 2D views on a 3D pattern. It can also create a 3D image from two or more radiological 2D images by applying well-known equations, in the case of a 2D 1/2 or 3D visualization, that is to say to the extent that these glasses have spatial vision, usually using stereoscopy, without this being systematic. The correlation is then very exact, and the indications are made on parts of the body read in three dimensions. This is made possible by the presence of the specific dedicated screens existing on this type of glasses. Advantageously, and according to an additional feature of the device according to the invention, the presence of a mini USB microphone on the temple (the right-hand one in the case of "Google Glass") makes it possible to give commands for the viewing and display of augmented reality information without the operator having to move his gaze from his work area. The device for digitizing the parts invisible to the eye 2 may be an analog radiological system (then passing through a scanning tablet) or a digital 2D or 2D 1/2 system, for example, and this is not a limit of the invention, of the RVG, scanner or tomography type. It can also use penetrating coherent optical systems, such as OCT. It can also use the 3D imaging principles of MRI or beta cameras. Terahertz imaging has appeared very recently; it has the disadvantage of still being inaccurate, but the great advantage of using a non-ionizing vector. It can be used as device 2, part of the invention. The same is true of all ultrasonic systems, whatever their type. The purpose of this second component of the invention is to collect the information invisible to the eye in order to create a second object completing the object created during the viewing of the visible parts.
[0016] The central unit 3 provides analog/digital conversion and management of these data. Its advantage is to digitize the data from the cameras located on the glasses, to digitize and/or collect the images coming from the peripheral devices (RX, MRI, OCT, ultrasound...), and then to bring them together into a single point cloud constituting a single object.
[0017] In addition to this merging, advantageously and according to an additional feature of the device according to the invention, the central unit orients the invisible part according to the orientation of the clinician's gaze, this indication being provided by the cameras, via the markers, and/or by additional systems such as gyroscopes or other apparatus for determining the position of an object in space, here the augmented reality glasses. Thanks to this application of our invention, the central unit can monitor the variation of the spatial position of the gaze, which makes it possible not only to see the invisible part but also to view it directly in the patient's mouth from different angles. This feature is important because, clinically, some anatomical structures can hide important areas; the practitioner, by moving his gaze, will be able to see what was hidden at the previous viewing angle. Advantageously, and according to an additional feature of the device according to the invention, the central unit may preferentially display vessels, nerves, bone or roots, because current software is able to discern these anatomical structures automatically and display them in different colors. This distinction allows the practitioner to know his field of work, to select it, but also to adapt to the anatomy specific to the treated patient. The invention makes it possible to move from standard anatomy to personalized anatomy, which is particularly important in implantology and in dental surgery. The dentist thus sees in the mouth of his patient the teeth and the gums, but also all the underlying structures, such as the roots of the teeth, the blood vessels and the nerves, from all angles and selectively, possibly with specific colors. The endoscopic accurate scanning camera 4 makes it possible to digitize one or more teeth by optical impression, using coherent or non-coherent, ultrasonic or photonic radiation.
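For illustration only, the reorientation of the invisible part according to the gaze can be sketched as applying the head pose reported by the glasses' inertial sensors to the registered hidden-anatomy cloud. A minimal sketch, assuming the pose is delivered as a unit quaternion relative to the reference acquisition pose; the rotation convention and all names are hypothetical:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def reorient_invisible(cloud: np.ndarray, head_quat: np.ndarray,
                       pivot: np.ndarray) -> np.ndarray:
    """Re-express the registered invisible cloud for the current gaze.

    cloud:     Nx3 points of the invisible anatomy in the mouth frame.
    head_quat: unit quaternion (x, y, z, w) from the glasses' sensors giving
               the head pose relative to the reference acquisition pose.
    pivot:     3-vector about which the rotation is taken (e.g., the
               centroid of the common landmarks).
    """
    R = Rotation.from_quat(head_quat).inv().as_matrix()  # undo head motion
    return (cloud - pivot) @ R.T + pivot

# Example: head rolled 10 degrees about z since the reference pose.
q = Rotation.from_euler("z", 10, degrees=True).as_quat()
pts = np.array([[0.0, 20.0, -15.0]])
view = reorient_invisible(pts, q, pivot=np.zeros(3))
```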
[0018] The invention can use all the cameras employed in the dental and medical world, which shows its openness and universality. This camera can make its metric readings using structured light projections. In this case, the camera has two or more channels, merged or separate, one for projection and the other for image recovery. A light system structured by LED, OLED, halogen, plasma or laser projects radiation onto the teeth in the form of known and structured dots, lines or grids. This structured projection is deformed according to the surfaces it strikes, and this deformation is transmitted to a sensor by the image-recovery channel. This allows the camera, by comparison between the characteristics of the projected or memorized light and of the deformed light arriving on the sensor in space and/or time, to know the shape and the dimensions of the teeth under analysis (an illustrative triangulation sketch is given after the peripherals list below). There are many endoscopic cameras that meet these characteristics. Advantageously, and according to an additional feature of the device according to the invention, this camera can use any system for measuring and analyzing the shapes of the teeth and/or the gingiva without projection of structured light. To do so, it can use telemetric or stereoscopic, single- or multi-camera methods. Such a system has the advantage of being simpler to design, but requires the development of more complex software, such as those developed for space applications. Among such intraoral cameras is, for example, and this is only a non-limiting example, the one we developed under the name Condor. Advantageously, and according to an additional feature of the device according to the invention, it may also include cameras combining the two technologies, or other principles such as OCT, ultrasound or X-rays, insofar as these provide metric information on the area and the organ studied. Of course, it is possible to use natural lighting but, as this type of camera is intended to work in dark or inaccessible areas (for example the mouth), a lighting system 5 can provide well-dimensioned illumination of the work area. Advantageously, and according to an additional feature of the device according to the invention, the lighting system can display information on the measured objects in augmented reality and in three dimensions, depending on the type of lighting used. Indeed, by choosing certain wavelengths, it is possible to detect and/or locate certain anatomical and pathological elements of the oro-facial sphere that are invisible or barely visible to the eye, and to indicate them in the field of operation in the form of augmented reality information, unlike direct 2D visualizations on a remote video screen. This assists the diagnosis but also provides calibration elements allowing the correlation between the image of the visible and of the underlying parts to build the complemented object. The peripheral devices can be: - A source of information 6 coming directly from stored functions or from software on site or off site (telemedicine), providing additional information to help the medical procedure of impression-taking and during preparation. - One or more peripheral stations 7 where the information with which the clinician works is visible and can be seen by his assistants, so that they can follow and enrich, in real time or deferred, the work done (assistance or teaching...). This processing can be video and/or digital.
- Intraoral instruments calibrated and correlated to the image of the visible and invisible parts, making it possible to follow their movements in real time in the invisible part. - A numerically controlled machine tool 8 which can, at any time, produce a real part from the captured virtual image, so that this device finds full application in the dental CAD/CAM chain invented in 1970 by François Duret, inventor of the present patent.
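For illustration only, the structured-light measurement principle described above (comparing the projected pattern with its deformation on the tooth) reduces, in its simplest line-projection form, to a triangulation. A minimal sketch under idealized assumptions (pinhole camera, a single light plane, known baseline and tilt); all symbols are hypothetical and not taken from the patent:

```python
import numpy as np

def depth_from_line_shift(u_px: np.ndarray, f_px: float,
                          baseline_mm: float, theta_rad: float) -> np.ndarray:
    """Depth of points lit by a projected light plane.

    Geometry: the camera is at the origin looking along z; the projector,
    offset by baseline_mm along x, emits a light plane satisfying
    x = b - z * tan(theta). A lit point imaged at horizontal pixel
    coordinate u (relative to the principal point) satisfies u = f * x / z,
    hence z = f * b / (u + f * tan(theta)).
    """
    return f_px * baseline_mm / (u_px + f_px * np.tan(theta_rad))

# Example: focal length 800 px, 40 mm baseline, plane tilted 30 degrees.
u = np.array([50.0, 100.0, 150.0])   # observed line positions (px)
z = depth_from_line_shift(u, 800.0, 40.0, np.deg2rad(30.0))
```

The deformation of the line across the image thus encodes the relief of the surface it strikes, which is the measurement principle stated in the paragraph above.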
[0019] Advantageously, and according to an additional feature of the device according to the invention, for transmitting data from the device or its peripherals there are associated: - transmission of all data by cable, telephone, Bluetooth or Wi-Fi; - a computer hardware system for further processing, dialogue/visualization with the operator, the assistants and/or the central unit, and for transmitting and storing information, orders and data, as permitted by the system's microphone, display or other form of communication. To accompany this hardware assembly, a software method is provided that meets the requirements of speed and accuracy necessary to those skilled in the dental art and that significantly facilitates the operative procedure. The original software system includes: - A real-time 3D reconstruction scheme from the two 2D image streams of the two (or more) cameras of the augmented reality display system; - A real-time 3D reconstruction scheme from a flow of 2D, 2D 1/2 or 3D images from the RX device or any other device capable of viewing the elements invisible to the eye; - An optical-trace search algorithm (projection of the same 3D point onto several different cameras) by calculating points of interest and matching them across the images; - An automatic real-time algorithm sequencing the image flow into spatially coherent subsequences, making it possible to follow the movement of the clinician's gaze; - An algorithm for parallel estimation of the camera positions in space and of the coordinates of the 3D points thanks to the optical traces; - A 3D interpolation algorithm for point clouds; - An algorithm for polygonization of 3D point clouds and texture calculation; - An algorithm for scaling the 3D reconstructions; - Two spatial precision enhancement algorithms; - Two algorithms for selecting anatomical elements taking into account, among other things, variations in contrast and density; - An algorithm for displaying the complemented object enriched with the display selections of the anatomical elements; - and algorithms for correlating the dynamic movements of the instruments known and used by the practitioner. The overall organization of the algorithm is as follows. The flow of images from the camera(s) is processed in real time so as to produce a first 3D reconstruction, viewable by the user as he looks around the object. The overall real-time 3D reconstruction scheme and the organization of the data vary according to the availability of the two (or more) cameras of the augmented reality system 1 and of the device 2 capturing the invisible information in non-real time. Each newly acquired image is first processed by the optical-trace search algorithm. From the correspondences, the sequencing algorithm then updates the sequencing of the video stream for better temporal performance. The parallel estimation algorithm then makes it possible, thanks to the optical traces of the glasses 1 and of the peripherals 2 (RX, ultrasound, MRI...), a) to find the positions of the cameras in space at the time of acquisition and b) to generate the 3D point cloud projecting onto the optical traces of the glasses cameras and of the peripherals. The single point cloud thus generated is then interpolated (algorithm) to obtain a denser cloud, and an implicit interpolation function is computed. With this function, a textured polygonization of the surface to be reconstructed (algorithm) is obtained.
At this stage, it is also possible to calculate quality indices for the final point cloud; some points (or areas) can be tagged as invalid or as special (bone, vessels, nerves, roots). The textured surface is finally displayed on the screen of the augmented reality glasses, in correspondence with the direct view, possibly with annotations indicating particular areas selected a priori by the clinician. The surface generated in real time is a spatial representation known only up to a scale factor. This scale factor can be calculated by the algorithm in quasi-real time (hidden-time calculation) or when the acquisition is complete. Finally, the final 3D model can be enhanced by the algorithm, which recalculates a 3D point cloud taking into account all the acquired views; this cloud is interpolated by the interpolation algorithm, and a surface and/or volume modeling algorithm then reconstructs the displayed global 3D model. We also know that radiological images generally carry information as 3D point clouds tied to elementary units, the voxels, directly correlatable to the point cloud obtained from the precise view made by the intraoral camera. With prior systems, on the other hand, it is impossible to merge the radiological views directly in the mouth with the optical impression views: the operator must follow on a remote screen the subcutaneous anatomical environment in which he works and mentally transfer this view into the space of his operative field. This very often leads to errors of judgment, especially if we consider the so-called arrow (lever-arm) phenomenon: a degree of inaccuracy in the insertion axis of a drill in implantology or prosthesis, of a root-canal instrument or of a trocar in medicine results in an error of several millimeters to one centimeter deep in the bone. The risk of damage to organs of the human body such as nerves, arteries and veins is therefore significant.
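For illustration only, the step "generate the 3D point cloud projecting onto the optical traces" can be sketched as a classical two-view triangulation, here with OpenCV. A minimal sketch, assuming the two camera projection matrices have already been estimated by the pose-estimation algorithm; the numbers and names are hypothetical:

```python
import numpy as np
import cv2

def triangulate_traces(P1: np.ndarray, P2: np.ndarray,
                       pts1: np.ndarray, pts2: np.ndarray) -> np.ndarray:
    """3D points from one optical trace seen in two glasses cameras.

    P1, P2:     3x4 projection matrices of the two cameras.
    pts1, pts2: 2xN arrays of matched image points (the optical traces).
    """
    Xh = cv2.triangulatePoints(P1, P2, pts1.astype(np.float64),
                               pts2.astype(np.float64))   # 4xN homogeneous
    return (Xh[:3] / Xh[3]).T                             # Nx3 Euclidean

# Example with two identical cameras 60 mm apart:
K = np.diag([800.0, 800.0, 1.0])
K[0, 2], K[1, 2] = 320.0, 240.0
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[280.0], [240.0]])
X = triangulate_traces(P1, P2, pts1, pts2)   # one reconstructed 3D point
```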
[0020] Advantageously, and according to an additional characteristic of the device according to the invention, it is possible to perform a triple fusion at the level of the central unit 3: that of the precise optical impression obtained with the aid of the intraoral camera 4, that obtained from the radiological analysis 2, whether in 2D, 2D 1/2 or 3D, and that observed by the cameras through the augmented reality glasses, certainly less precise, but serving as support for the two preceding ones. The device according to the invention therefore allows the clinician to see, without having to look away, not only a precise surface modeling, like any known optical impression system, but in addition a modeling of what is invisible in his operative field, that is to say the sub-epithelial and bony part, fused with the external part. He thus has before his eyes a single operating field where both the external parts and the normally invisible internal parts are visible. Advantageously, and according to the invention, it is possible to follow the movements of dental surgery instruments both in the roots (endodontics) and in the bone (surgery and implantology), ensuring a mastery of acts hitherto impossible in real time. It is thus possible to perform explorations and root or bone treatments by following the movement of the working instrument in the invisible parts, to the extent that it has been calibrated in the reference frame of the optical impression and/or of the radiological acquisition. The practitioner sees through his augmented reality glasses the outside of the crown, carried by the visible or even precise view from the camera 4 merged with the general view through the glasses 1 and augmented by the invisible view of the root (length and shape) coming directly from the RX, MRI or terahertz acquisition device 2, but also, and this is fundamental and advantageous according to an additional feature of the device according to the invention, the movement of his working instruments inside this root or the bone (in surgery and implantology).
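For illustration only, the instrument tracking described in this paragraph can be sketched, for a non-deformable tool, by combining the tracked marker pose with the calibrated tool geometry (the "known dimensions" of the claims). A minimal sketch; all names and numbers are hypothetical:

```python
import numpy as np

def tool_tip_position(marker_R: np.ndarray, marker_t: np.ndarray,
                      tip_offset: np.ndarray) -> np.ndarray:
    """Tip of a rigid instrument in the common (mouth) reference frame.

    marker_R, marker_t: pose of the tool's marker as tracked by the
                        glasses cameras (3x3 rotation, 3-vector).
    tip_offset:         calibrated vector from marker to tip, expressed
                        in the marker's own frame.
    """
    return marker_R @ tip_offset + marker_t

# A drill whose tip sits 30 mm below its marker:
tip = tool_tip_position(np.eye(3), np.array([10.0, 0.0, 5.0]),
                        np.array([0.0, 0.0, -30.0]))
```

The resulting tip position, expressed in the same reference frame as the merged visible/invisible model, is what allows the tool to be displayed inside the root or the bone.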
[0021] The figures represent different implementations of the device, showing all the possibilities that it offers in the daily practice of the dentist: the augmented reality glasses and their less precise visualization cameras 1, the peripheral devices visualizing the invisible parts 2, the central unit 3 digitizing and correlating the visible and invisible views, the very precise intraoral camera for the visible views 4, and the specific lighting 5. Figure 1 is a representation of the invention in the form of a didactic drawing, showing the essential elements and the accessories that can be implemented in this enriched visualization device, which brings together in a single view the visible and invisible parts through the augmented reality process and allows the operator never to leave his operative field when he carries out his measurements and/or his diagnoses and/or his clinical acts, the device being of particular interest in the field of dentistry. The device comprises augmented reality glasses 1, such as "Google Glass" (this is not limiting, because other glasses of this type exist), through which the practitioner 6 has a natural stereoscopic vision of the mouth, also visible on the screen 7, and thus of the zone 8 that he measures and studies. When the operator looks at this work area, the stereoscopic camera(s) 9 forming part of the glasses observe the same scene and are able to collect information leading to the creation of a point cloud, here called the visualization cloud. As the dentist's head can move relative to the observed area, a 3D accelerometer/gyrometer/magnetometer has been added close to the eyes, facilitating tracking of the clinician's observation axis. This is not mandatory, because the software can use the connection areas, but it greatly facilitates the work of the central unit, which dynamically correlates the visible and invisible parts (forming what is hereinafter called the complemented object) when the observer has to move his gaze outside the work area and come back to continue his work.
[0022] This dynamic correlation means that, whatever the angle of vision, the clinician sees the two parts from different angles, which can be fundamental if, in the invisible part, an anatomical structure, for example a tooth root, hides a pathology or an area to be worked on. The invisible file of the mouth is provided by the peripheral imaging system 2. This may be a scanner or a tomography system offering, by assembling its sections, a 2D 1/2 view, preferentially showing the bone structures. For a more complete view, very powerful software has been added to distinguish the soft tissues on radiological images with little deformation. This was necessary in implantology, where the gesture must be precise if one does not want to risk injuring an anatomical element such as the nerves or the blood vessels. The cone beam is in this category; it is used more and more because it gives sufficient indications on the hard tissues and the invisible soft tissues without distorting too much the 2D 1/2 view provided after software reconstruction. It is possible to have more accurate information, directly in 3D, in the implementation of the present invention by using a more complex and expensive imaging technique such as MRI or beta cameras. Finally, still as device 2 of the present invention, more recent techniques can be implemented, such as OCT (optical coherence tomography) or terahertz imaging, which, in common with MRI, has the advantage of not being ionizing. Finally, there remains ultrasound imaging, which can make it possible to visualize the underlying tissues in real time, as described in patent No. FR 83.07840 of May 4, 1983, "method of capturing the shape of human organs or pathological abnormalities and device for its implementation". Although it cannot be excluded from the present invention, the problem of ultrasound remains its inaccuracy. In any case, the current peripherals 2 make it possible to digitize the invisible part of the mouth and to separate its various anatomical components in order to make them appear or disappear selectively, because these techniques now know how to distinguish the vein from the artery, the vessels from the nerves, the (very dense) roots from the bone, or the root canal from the rest of the root. This will be very important in the clinical manipulation, specific to this invention, that we will describe later. The third part of the present device is the central unit 3, in charge of managing the digital information of the surface of the visible parts transmitted by the cameras of the augmented reality glasses and that of the invisible parts, transmitted in real time (for example ultrasound) or deferred (for example cone beam). In particular, it has to find common areas in order to correlate the two point clouds, leading to the construction of a single complemented object (combining visible and invisible in a single point cloud). It is a matter of transferring at every moment the invisible view onto the visible view that the clinician observes, by relying on common elements, and also of making this invisible part dominant over the visible part, with an adjustable transparency index. The present invention additionally comprises an optical impression camera 4 allowing the dentist 6 or the doctor to perform his 3D measurements in the mouth or on the skin of his patient with great precision.
The measurement made by this camera 4 is very precise (a few microns) and taken very close to the teeth, the depth of field being very low, which explains why it must proceed by scanning all the teeth 8, either by successive photographs (one-shot impression) or by 3D filming (full motion). In this case, the two measurements, the one obtained with the endobuccal camera 4 and the one obtained with the cameras of the augmented reality glasses 1, provide two files corresponding to the same area but not having the same precision. These files can be simple electro-optical information or more sophisticated information such as digital representations in the form of point clouds, or even surface or volume models. In all cases there exist between these two files common values, also used to obtain the complementarized object, such as points located in easily identifiable areas, for example the tops of the cusps of the teeth 8 or the bottoms of their grooves. These common reference values allow the central unit 3 to merge the two files into one while preserving their specificities.
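The merging of the two files on their common reference values amounts to a rigid registration between matched landmarks. A minimal sketch using the Kabsch algorithm is given below by way of example; the landmark arrays are hypothetical and the patent does not name a specific algorithm:

    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rigid transform (R, t) mapping src landmarks onto dst.

        src, dst: (N, 3) arrays of matched reference points, e.g. cusp tips
        and groove bottoms identified in both the coarse glasses cloud and
        the precise intraoral-scanner cloud. Kabsch algorithm.
        """
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)       # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))    # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = c_dst - R @ c_src
        return R, t

    # Example with synthetic landmarks: align the glasses cloud onto the
    # precise camera frame, then stack the two clouds into one file.
    rng = np.random.default_rng(0)
    camera_landmarks = rng.random((8, 3))
    glasses_landmarks = camera_landmarks + 0.001 * rng.standard_normal((8, 3))
    R, t = rigid_register(glasses_landmarks, camera_landmarks)
    merged = np.vstack([camera_landmarks, (R @ glasses_landmarks.T).T + t])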
[0023] Similarly, the use of specific lighting 5 can facilitate the 3D reading of teeth, which have a very specular reflection. This invention is perfectly compatible with this type of camera, invented by Duret in 1970 (DDS thesis, 2nd cycle, Lyon, France, 1973). Thus the specific light can be an active and structured projection, such as the projection of grids or other patterns. It is also possible to use cameras that do not use structured light but rely on the principles of passive stereoscopy (AWS or other), on time-of-flight techniques, or on holographic techniques and their derivatives such as OCT. This new device is completely universal and applicable to any form of visualization and/or localized endoral measurement. Unlike the architectural techniques conventionally used by augmented reality glasses, which look for specific points, it uses a double optical impression, the one resulting from the endobuccal cameras 4 and the one made at the same time or in deferred time through the augmented reality glasses 1, in order to enrich the views and/or substitute one for the other according to their degree of precision. Similarly, it is possible to export these data for viewing on a peripheral screen 7 for the practitioner's assistants, with whom he communicates via a microphone on the glasses or an independent one 11, or to exploit them for the machining 12 of implant guides or anatomical parts while working on the patient 13, allowing him to better understand the immediate environment during his work in the mouth. This machining can be done by subtraction (conventional machining by milling) or by addition (unconventional machining such as laser melting or stereolithography). FIG. 2 represents the invention in the form of a prototype, part of which has already been realized. In the case presented, an intraoral reading camera 4 using passive stereoscopy and specific lighting 5 is used to measure the visible part of the mouth (the teeth and the gum). The central unit 3 is powerful but conventional, while the software is specific to the invention. The glasses used are the classic "Google Glass" 1, to which accelerometers and two cameras are attached. The machine tool 17 is a material-removal machine adapted by the inventor's laboratory. FIG. 3 is important because it is the diagram representing the heart of the device that is the object of the invention.
[0024] There are represented the augmented reality viewing glasses 1 allowing the clinician to see the complementarized object, that is to say his operative field, visible in direct vision, but also the visible and invisible parts perfectly correlated and digitized in the form of a single virtual object merged with the direct vision. The devices 2 able to transmit the information on the invisible part of the mouth are connected, directly or not, to the central unit and make this information available either a priori (X-ray) or in real time (ultrasound). The central unit 3 communicates permanently with the glasses so that the complementarized object can be seen from different angles. For that, the software relies on the 3D point clouds common to the stored view of the invisible part and the 3D view that the clinician observes via the cameras carried by the augmented reality glasses.
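Keeping the stored invisible model aligned with the clinician's current viewpoint then reduces to re-projecting it with the pose estimated from the common points. A hedged pinhole-projection sketch, with illustrative intrinsics and pose, follows:

    import numpy as np

    def project_points(points, R, t, fx, fy, cx, cy):
        """Project 3D model points (N, 3) into the glasses-camera image.

        R, t: current pose of the model in the camera frame, re-estimated
        from the common 3D point clouds each time the head moves.
        fx, fy, cx, cy: assumed pinhole intrinsics of the viewing camera.
        """
        cam = (R @ points.T).T + t            # model frame -> camera frame
        cam = cam[cam[:, 2] > 1e-6]           # keep only points in front
        u = fx * cam[:, 0] / cam[:, 2] + cx
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return np.stack([u, v], axis=1)

    # Example: redraw the invisible layer at an identity pose.
    pts = np.random.rand(100, 3) + np.array([0.0, 0.0, 0.3])
    uv = project_points(pts, np.eye(3), np.zeros(3), 800.0, 800.0, 320.0, 240.0)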
[0025] The complementarized object must therefore be considered as a stable object in an orthonormal frame with respect to the cameras of the augmented reality glasses. This object represents a kind of more or less cubic volume around which the observer turns. It is the common references, or other adjoined indexes (marker blocks), that allow the clinician to turn around the virtual complementarized object as he would around a hologram. To make this mapping of the two point clouds more reliable, it appears useful to make a more precise survey of the visible surface than the cameras worn on the augmented reality glasses can provide. For this purpose an endoscopic camera 4 is added, allowing the precise digitization of the visible surface, the camera using structured light or not, and the lighting, specific or not, allowing an optimal vision of the mouth, the teeth and the gum. Similarly, to provide a significant complement to the diagnostic facet of the present invention, the device comprises a specific illumination that optimizes the reading of hard dental tissues (white and blue light) but also makes it possible to highlight certain pathologies of hard or soft tissues (fluorescence, phosphorescence, IR radiation reaction, mixed IR/near UV). When carrying out these clinical acts, advantageously and according to an additional feature of the device according to the invention, indications on the physiological state of the patient may appear in the operative field, since it is interesting to know the cardiac state or other general information during particularly traumatic surgery. FIG. 4 is a pictorial representation of the stages of construction of the complementarized 2D-3D image. First, the clinician makes a 2D, 2D 1/2 or 3D view (FIG. 4a) thanks to the peripheral 2. A 2D view (for example RVG), a 2D 1/2 view (tomography, cone beam, OCT or scanner) or, better still, a 3D view (MRI, ultrasound) makes it possible to have information about invisible structures such as the roots 14 or the crowns of the teeth 15; this information is oriented towards the hard tissues in radiology or towards the soft tissues in MRI, the cone beam being a good compromise between the two. The clinician then looks at his operative field in the mouth. The augmented reality glasses carry stereoscopic cameras for visualizing the visible portion of the mouth in 3D, i.e. the crowns of the teeth 16 and the surface of the gingiva 17 (FIG. 4b). He can also use an intraoral reading camera/scanner 4 if he wants high accuracy in his reading of the visible part of the complementarized image; this is the case for the image presented in FIG. 4b. The central unit fetches the common clouds 18 in the first image of the invisible part (radiological here, FIG. 4a) and the second image of the visible part (here obtained using the Condor scanner, FIG. 4b). A cloud of common points emerges (FIG. 4c). This cloud corresponds to the dental crowns 19, because they are indeformable and present in both the visible and invisible parts 15. From this common point cloud, the software in the central unit brings the two structures together and merges them at the level of the point cloud to form a single 3D volume object, the complementarized object 20, joining the visible portion 16 and the invisible one. It is this object (FIG. 4d) that will be displayed on the augmented reality glasses.
The dentist thus sees in the mouth of his patient the visible part and the invisible part, which allows him to treat not only the crowns but also the roots of the teeth and the bone structures of the maxilla. It then remains for the software of the central unit 3 to follow the movements of the dentist's eyes to allow him to move around this complementarized object. To do this, the cameras located on the glasses continue to follow the different orientations that the point cloud 18 takes with respect to the cameras, and therefore with respect to the dentist. There follows a permanent registration of the complementarized 3D virtual image displayed on the glasses 1 of the practitioner, information complementary to what he naturally observes at his clinical site. This registration is permanent and follows the displacement of his eyes. While the previous view was a lingual view, the next view (FIG. 4e) is a buccal view: the clinician has looked away and sees the teeth from another side. In this view the vestibular roots are short because he has a more plunging viewpoint. The complementarized object 20, composed of the visible and invisible parts, follows the movement of the gaze and reveals the other face of the 3D image. This is particularly interesting because it makes it possible to see the emergence of the mental foramen 22 and the exit of the nerves and blood vessels 23. According to the same principle of the triple fusion endobuccal camera/X-ray/augmented reality view, and as an additional characteristic of the device according to the invention, it is possible to know with even more precision the environment of nerves, veins, arteries and anatomical structures.
[0026] The dentist therefore knows exactly where he should inject to achieve perfect anesthesia of the anterior sector (incisors and canines). He can also see the bone margin of the mandible 24, which is very important in implantology.
[0027] It goes without saying that an occlusal view, without any transparency effect for the vasculo-nervous bundle, respects the visible surface, which remains dominant over the invisible surface (FIG. 4f). This invention makes it possible to see the entire dental anatomy directly in the mouth, at the clinical site of action, without having to look away or make subjective adjustments to find out where these anatomical elements are. The act becomes precise and secure. FIG. 4g shows the view of the complementarized object associating the visible and invisible parts in a single set.
[0028] FIG. 5 illustrates the effect of the clinician's gaze shift (5a) on the vision observed through the augmented reality glasses on the lingual (5b), occlusal (5c) or vestibular (5d) side. When he moves his gaze, he is able to see the interior of the complementarized object, normally invisible, either on the vestibular side, the lingual side or the occlusal side, which allows him to better understand the position of its components. FIG. 6 illustrates the effect of varying the coefficient or index of transparency (familiar to users of drawing software such as "Photoshop"). In FIG. 6a, the gingiva is removed on the plane closest to the observer but the bone is visible. The crown of the tooth 16, the start of the root 25, the loosening of the root 26 and the bone surface 27 are visible. It is also possible to see the mental foramen 22, which is so important for anesthesia, and the emergence of the nerve 23. In FIG. 6b, which is a deeper plane, can be seen the crown 16, the start of the root 25 and its loosening 26; visible in addition, by transparency in the bone, are the root 14 and the nerve 28 which supplies the tooth. As shown in the section on the left, the cortical bone 29 has been removed in favor of the medullary bone 30, which also makes it possible to see a cyst or granuloma 31. In FIG. 6c, where the medullary bone has been made transparent, the clinician can see distinctly in the mouth of his patient, in the extension of each crown 16, the root of each tooth 14, but also the nerve 28 outside the tooth and, inside the tooth, the root canal 32 enclosing the neurovascular bundle. The granuloma or cyst 31 is also more visible.
[0029] Finally, in the last plane chosen in this example (which is not limiting), the root canal 32, connected here to the nerve and to the vessels external to the tooth 28, but also the coronal pulp of multi-rooted teeth 33 and single-rooted teeth 34, are clearly visible, which, of course, makes it possible to know perfectly the position of the pulp horns 36. Indeed, although the complementarized 3D object is unique, it retains the knowledge of the visible and invisible parts. The dentist will therefore know exactly where to open the tooth and enter the root 36 to reach the nerve 37 with a minimum of damage to the tooth. It would be the same in the bone structure if the clinician wanted to reach a granuloma 31, a cyst or a tumor. These different planes can be chosen freely by foot pedal, keyboard or trackpad. In the same way, more local indications can be addressed to him.
[0030] These may be, and this is not limiting, indications on the status of his work in progress or after its completion. For example, in FIG. 7 are indicated the remaining undercuts 38 during a preparation of dental prostheses or the installation of an implant, indicating which action to take and at which level to retouch or modify the work so as to guarantee a good prosthetic realization. This indication appears in the form of a color or texture overprint on the area to be worked. It disappears when the work done has responded to the clinical need.
[0031] Likewise, this figure shows the form of the invisible subgingival preparation when it is covered by the gingiva. The supra-gingival part is directly visible in the mouth; the juxta-gingival part is difficult to apprehend by direct methods, whereas with this invention it is very clearly visible 38, as is the subgingival portion 40. This allows the clinician to know perfectly whether he has touch-ups to do. When preparing an inlay/onlay 41, indications are given in augmented reality directly in the mouth on the preparation, indications that disappear when the preparation has been performed correctly. The same applies when making a bridge. The calculation of the insertion axis 42, resulting from the analysis, for example, of the centers of gravity, indicates to him the angle to be respected 43 and the zone to be retouched 44, but also the angle that his cutter 46 must adopt if it is equipped with a 3D spatial position detector 45. As illustrated in FIG. 8, the present invention makes it possible to associate with the complementarized object the dynamic tracking of the instruments used by the dentist or the surgeon when he performs a root canal treatment or surgical procedures such as tooth extraction or implant placement. It is possible to follow, directly in the mouth and in real time, on the operative field and in the invisible part of the complementarized object, without having to look away, his operative act and the movement of the instruments he uses in the same frame, thanks to the augmented reality glasses. The clinician can follow over time the movements of the point cloud, or of characteristic or memorized models known a priori, of his working instruments in the oral space. Thus, advantageously and according to an additional characteristic of the device according to the invention, as we see in FIG. 8a, these instruments are handled in the following manner:
- The first step consists in locating, in space, the instrument used at the start of the operation, using the cameras 9 located on the augmented reality glasses 1 and specific references 47 (for example an instrument head of a particular shape or a bar code). The instrument used is searched for in a library containing a set of memorized instrument shapes. In this case the instruments are modeled by software on the basis of their image, with a particular identification making them easily identifiable. This may be a mark fixed on the sleeve of the instrument or a wireless or magnetic message, without this being limiting, the principle of the invention being the recognition of the object used by the clinician.
- It is also possible to identify the instrument and manually indicate its position on a screen. This has the advantage of facilitating the work of image processing, but requires the practitioner to intervene on the screen.
- The second step is to follow the movement of this instrument, now known and positioned in the space of the complementarized object. This tracking is possible, in real time or near real time, by the cameras 9, which locate in space the movements of the points of the marks previously identified by the image-processing software. This tracking is therefore a dynamic mapping, in real time or slightly deferred, of the instrument and of the complementarized object, through the monitoring of these reference marks, characteristic of the instrument used, and of the indeformable zones characteristic of the complementarized object.
- It may be accompanied by a sound or visual indication if there is a risk of reaching sensitive areas (veins or nerves).
- It may also be accompanied by a visual or audible indication so that the clinician's action is precise and in the right direction (impacted teeth, granulomas or cancers), with information allowing an ideal or automatic orientation, or with the appearance of a zoom to better visualize a risk area.
The practitioner thus has a view of the movement of his instruments in the complementarized object as if he were using dynamic radiography, which is particularly interesting because he can follow the progression without any ionizing radiation. As shown in FIG. 8a, the instrument used is composed, for example (but this is not limiting), of two parts: an indeformable part 48 containing spatial locating elements 47 for recognizing and tracking the object in its movements in space, and another part corresponding to the active zone 49, which is clinically effective. These zones can coincide. Thus, advantageously and according to an additional characteristic of the device according to the invention, there are two possibilities. Either the instrument is deformable, such as, for example, a pin 48, a probe or a reamer for endodontic treatments. In this case the instrument is correlated with the density or contrast (this is given only by way of example) of the area into which it is introduced in the complementarized object. This zone of uniform optical quality 50 in the 3D image (the progression zone) can be identified automatically or indicated by the operator. The instrument is taken to deform so as to follow this density or contrast 51: for example, a deformable canal instrument introduced into a pulp chamber 50 and then into a dental canal 51, which have a very specific density and grey level, will be modeled by the software, which has recognized it, as deforming to follow the characteristic density or contrast of the canal. Or the instrument used is indeformable, such as, for example in FIG. 8b, a bur 59 or a needle 58. It crosses the complementarized object without regard to the densities or contrasts characterizing the different anatomical regions. The software is able to anticipate this instrumental movement and the attendant risks (meeting a nerve or a vessel, or even perforating a sinus in the upper jaw). Thus, advantageously and according to an additional characteristic of the device according to the invention, the indeformable or deformable instruments are stored in a specific library. This allows the clinician to select them manually or to initiate an automatic search. The geometric characteristics of the instrument having been memorized, its integration into the image containing the complementarized object is particularly easy.
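By way of non-limiting illustration, the tracking of a deformable instrument along a zone of characteristic density can be sketched as a greedy walk through the voxel volume; the tolerance and variable names below are assumptions, not the patent's algorithm:

    import numpy as np
    from itertools import product

    def follow_channel(volume, start, target_density, max_steps=200, tol=30.0):
        """Greedy sketch of a deformable instrument (e.g. an endodontic
        file) following a region of characteristic density: from the entry
        voxel, step to the 26-neighbour whose grey level is closest to the
        canal's density, as long as it stays within a tolerance.

        volume: 3D array of grey levels (the invisible part of the
        complementarized object). start: (z, y, x) entry voxel.
        """
        path = [tuple(start)]
        visited = {tuple(start)}
        offsets = [o for o in product((-1, 0, 1), repeat=3) if any(o)]
        for _ in range(max_steps):
            z, y, x = path[-1]
            best, best_err = None, np.inf
            for dz, dy, dx in offsets:
                p = (z + dz, y + dy, x + dx)
                inside = all(0 <= c < s for c, s in zip(p, volume.shape))
                if p in visited or not inside:
                    continue
                err = abs(float(volume[p]) - target_density)
                if err < best_err:
                    best, best_err = p, err
            if best is None or best_err > tol:   # left the canal density
                break
            path.append(best)
            visited.add(best)
        return np.array(path)

    # Example on a synthetic volume.
    vol = np.random.rand(32, 32, 32) * 255.0
    path = follow_channel(vol, (16, 16, 16), float(vol[16, 16, 16]))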
The recognition of the instrument can also be done automatically thanks to the reading of references in various forms (bar codes, for example). Once this identification is made, it leads to automatic knowledge of the geometric data of the instrument, easier identification of it in the image displayed by the cameras 9 of the visualization glasses, and the monitoring of its movements in the complementarized object. Thus, advantageously and according to an additional feature of the device according to the invention, the movements of the deformable or indeformable instrument are monitored by optical means, but also by any spatial tracking technique (accelerometers, gyroscopes, magnetometers, ultrasound, IR, GPS...). As shown in FIG. 8b, in implantology it is possible to indicate the best position and the best insertion axis of the drill preparing the site of the implant. If the tool 54 is provided with three-dimensional registration, for example such as that of patent FR No. 92.08128 (but this is not limiting), the software indicates in augmented reality on the display glasses, directly at the level of the drill or of the handpiece (as chosen), the axis to be respected 55, and emits a tone signal of variable pitch depending on the accuracy or the drift 68 of the position, or on the proximity of an anatomical element 66. Local information 69 may also appear superimposed on the augmented reality glasses, associated with the software in the central unit. This indicates all the information 69 in real time and guides the operator to target the drilling perfectly 65-68 and to stop it when it is deep enough 67. Similarly, and still in implantology, the invention indicates which type of implant, which shape or which brand best responds to the three-dimensional environment analyzed, by virtue of the triple fusion of the precise image, the augmented-reality image and the X-ray image.
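The variable-pitch tone signal can be sketched as a simple mapping from the angular drift between the current and ideal drill axes to a frequency; the mapping constants below are illustrative assumptions, the patent only specifying a tone of variable pitch:

    import numpy as np

    def axis_deviation_deg(tool_axis, planned_axis):
        """Angle in degrees between the instrument's current axis and the
        ideal insertion axis computed by the central unit."""
        a = tool_axis / np.linalg.norm(tool_axis)
        b = planned_axis / np.linalg.norm(planned_axis)
        return float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))

    def guidance_tone_hz(deviation_deg, base_hz=440.0, max_dev_deg=15.0):
        """Map angular drift to a tone: on-axis stays low-pitched, growing
        drift raises the pitch so the clinician can correct without looking
        away. The frequency range is an assumed choice."""
        k = min(deviation_deg / max_dev_deg, 1.0)
        return base_hz * (1.0 + 3.0 * k)   # 440 Hz on axis, up to 1760 Hz

    dev = axis_deviation_deg(np.array([0.1, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]))
    print(guidance_tone_hz(dev))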
[0032] In some cases, no implant or prosthesis corresponds to the ideal environment visualized in augmented reality by the operator, and it is necessary to make the implant or the prosthesis to measure. Advantageously and according to an additional characteristic of the device according to the invention, the central unit 3 is connected to a numerically controlled machine tool 12 for producing this specific implant or prosthesis, unitary or plural. When the implant drill approaches a danger zone (here a nerve 37), it is possible to have an enlargement (automatic or on demand) of the risk area 57. This makes it possible to better control the movement of the drill 56 with respect to the nerve 37. Finally, as shown in FIG. 8c, it is possible to follow a surgical procedure in the complementarized object. In the example presented, the dimensionally stable instrument 60 used is an elevator, making it possible to arrive exactly at the level of the root 62, which is normally invisible in the mouth. Thanks to the invention, it is possible to see and follow in the complementarized object the progression of the head 61 of the elevator. The same applies to the search for an impacted tooth 63 located under the bone and the gingiva 27. It goes without saying that this application of the invention is not limited to dentistry but can be applied to any surgical operation on the body or in veterinary medicine. FIG. 9 explains, by means of a diagram, the different clinical steps of the manipulation. The first step is to recover, from the device 2, the information on the invisible part, that is to say a 2D 1/2 or 3D view of the underlying tissues 65. This view corresponds to a cloud of points (the voxels) representing the teeth (crowns 15, roots 14 and canals of the pulpal tissues), the medullary and cortical bone 24, the vessels and the nerves 23, but also the anatomical geography of its invisible components. A file containing these voxels in the form of a point cloud 66 is sent to the central processing unit 3 in a format such as STL, PLY or .dat (this is only an example, each format having characteristics of its own). Once the file of the invisible part 67 has been received by the central unit, the practitioner can put on his glasses 1, visualize his work zone in the mouth of his patient 8 and, using the HMI, put the augmented reality glasses into operation. This allows him to retrieve a second cloud of points 68 of the visible part of his patient's mouth through the action of the external cameras and/or optical impressions 69 and their connections 70 with the central unit 3. If he wishes, the user can reinforce the precision of his point cloud by using optical impression cameras with or without structured light 71. This action makes it possible to send a cloud of precise points 72 to the central unit, which enhances the quality of the camera cloud of the augmented reality glasses 68 by relying on the areas common to the point clouds of the visible and invisible portions 68, 67 of the complementarized object. Using the specific lighting 74 of the endobuccal camera, he can enrich the information 75 received by the central unit, mainly in the field of diagnosis 76. At this stage, the central unit has two point clouds 67, 68, reinforced by information 73 and possibly diagnostic information 76. There then takes place the merger at the central unit 77 and the creation of the complementarized object.
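By way of example, the point cloud of the invisible part can be exchanged in an ASCII PLY file; the following minimal writer is a sketch, the exact exchange format not being fixed by the patent:

    import numpy as np

    def write_ply(path, points, colors):
        """Write an (N, 3) point cloud with per-point RGB to ASCII PLY,
        one common exchange format between the imaging peripheral and the
        central unit."""
        header = "\n".join([
            "ply", "format ascii 1.0",
            f"element vertex {len(points)}",
            "property float x", "property float y", "property float z",
            "property uchar red", "property uchar green", "property uchar blue",
            "end_header",
        ])
        with open(path, "w") as f:
            f.write(header + "\n")
            for (x, y, z), (r, g, b) in zip(points, colors):
                f.write(f"{x} {y} {z} {int(r)} {int(g)} {int(b)}\n")

    pts = np.random.rand(10, 3)
    cols = np.full((10, 3), 200)
    write_ply("invisible_part.ply", pts, cols)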
The complementarized object is then transmitted 78 to the augmented reality glasses so that it is displayed in the field of view and the practitioner can see in the mouth of his patient the visible and invisible parts of his operative field 79. All of these commands are under the control of a specific manual or voice HMI 80. The practitioner's gesture is all the freer, and his vision all the more direct, as the connection between these different units is made by a long cable or without cable (WiFi, Bluetooth...). Wireless connections are preferred, but this is not limiting of the invention. If the connections are made by cables, it is preferable to use, for example, a self-powered USB connection. If the connection is wireless, it can be, for example, in WiFi mode, but this is not limiting according to the invention. In this case, if it is not originally present in the device, an antenna will be added in the camera 4, the augmented reality glasses 1 and the other devices 2. Similarly, on the computer 3, or possibly on an intermediate box, a data transmission and reception antenna will be introduced in the USB connection, corresponding to the orders given by the dentist 6 using his microphone 11, by the program located in the computer 3, or by the intermediate box if it does not have this transmission function. This arrangement allows quick, user-friendly and easy communication whatever the configuration of the dental office. As shown in the diagram of FIG. 9, it is possible to send or receive other important information, enabling the clinician 6 to work in comfort and accuracy. This is made possible by the creation of the complementarized object and the view through the augmented reality glasses 1, without his eyes leaving the operating field 8. Thus, advantageously and according to an additional characteristic of the device according to the invention, the practitioner receives not only static information, through the fusion and the creation of a complementarized object 20, but also dynamic information, by following over time the movements of the devices involved in this complementarized object. At each moment a registration is made between the complementarized object and the movements or variations imparted by the clinician's actions and visualized in augmented reality through his glasses 1. All this processing is feasible by following a precise mathematical procedure.
[0033] The precise mathematical procedure can be illustrated by a presentation of the software elements usable for the fusion of the two visible and invisible parts for the creation of the complementarized object. This is just one example, based on optical-trace calculation by the mapping of points of interest. It makes it possible to explain how to construct a complementarized object from the images received from the device 2 and the 2D images from the reading by the cameras 9 on the augmented reality glasses and/or those resulting from the optical impression cameras 4.
[0034] The search for the optical traces of remarkable 3D points between the common part of the visible and invisible clouds is done by searching for points of interest on all the acquired images, then by looking for the correspondences between the points of interest of different images.
[0035] Several schemes are possible. A first scheme is the optical tracking of corners. The general idea is to compute remarkable points (corners) on an image, then to follow these points on the following images without having to re-detect them. The tracking phase continues as long as a certain percentage of the remarkable points of the first image is still detectable (typically 70%); below this threshold, a new phase of detection of remarkable points is conducted on the following image. The detection of corners is done by computing, for every pixel $(x, y)$, the $2 \times 2$ matrix

$$C = \begin{pmatrix} \sum_W \left(\frac{\partial I}{\partial x}\right)^2 & \sum_W \frac{\partial I}{\partial x}\frac{\partial I}{\partial y} \\ \sum_W \frac{\partial I}{\partial x}\frac{\partial I}{\partial y} & \sum_W \left(\frac{\partial I}{\partial y}\right)^2 \end{pmatrix},$$

where $I$ denotes the intensity of the image at $(x, y)$ and $W$ a neighborhood of $(x, y)$. Let $\lambda_1$ and $\lambda_2$ be the two eigenvalues of this matrix; if these two values are above a certain threshold (typically 0.15), the point is considered a remarkable point.
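By way of illustration, this corner detector (of the Shi-Tomasi type) can be sketched as follows, assuming an image normalized to [0, 1]; the window size is an assumption:

    import numpy as np
    from scipy.signal import convolve2d

    def detect_corners(img, win=7, thresh=0.15):
        """Build the 2x2 structure tensor C over a window W around each
        pixel and keep pixels whose two eigenvalues both exceed the
        threshold, as in the scheme above. img: 2D float array in [0, 1].
        """
        Iy, Ix = np.gradient(img)
        k = np.ones((win, win)) / (win * win)       # box filter = sum over W
        Sxx = convolve2d(Ix * Ix, k, mode="same")
        Sxy = convolve2d(Ix * Iy, k, mode="same")
        Syy = convolve2d(Iy * Iy, k, mode="same")
        # Eigenvalues of [[Sxx, Sxy], [Sxy, Syy]] in closed form.
        tr, det = Sxx + Syy, Sxx * Syy - Sxy * Sxy
        disc = np.sqrt(np.maximum(tr * tr / 4.0 - det, 0.0))
        l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
        return np.argwhere((l1 > thresh) & (l2 > thresh))

    corners = detect_corners(np.random.rand(120, 160))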
[0036] For the tracking, one seeks, between two images $i$ and $i+1$ and for each remarkable point, the displacement $d = (d_x, d_y)$ which minimizes

$$\sum_{(x,y) \in W} \left( I_i(x, y) - I_{i+1}(x + d_x, y + d_y) \right)^2 .$$

This displacement is computed as $d = C^{-1} b$, with $C$ the $2 \times 2$ matrix previously mentioned and

$$b = \sum_{(x,y) \in W} \nabla I(x, y) \left( I_i(x, y) - I_{i+1}(x, y) \right).$$

Since this optical tracking technique is reliable only for small displacements, possible large displacements are handled by sequentially computing the displacement $d$ on a pyramid of images (from a heavily subsampled version of the images up to the initial resolution). The above techniques are based on the implicit assumption that the flow of images is coherent, i.e. the displacement between two successive images is small, and two successive images are of sufficient quality that a satisfactory quantity of matching points (at least 30) is found. Regarding the displacement between two images, the acquisition is at a conventional video-stream frequency, so the displacement between two images is very small; for a larger displacement, which would make it impossible to find points corresponding with the preceding images, a new region can be generated. With regard to the insufficient quality of an image (a blurred image, for example), the mapping phase acts as a filter, since very few corresponding points will then be found; the image is stored without being processed, and the system waits for the next image having a sufficient number of corresponding points. A second scheme concerns invariant points and least-squares matching. The points of interest are searched on the images by well-known techniques that seek points remaining invariant under changes of scale and illumination. These techniques have the advantage of providing morphological descriptors for each point of interest. The mapping between points of interest for a given pair of images is performed by searching, for any point of interest $x_{i1}$ of image 1, the point of interest $x_{i2}$ of image 2 minimizing the least-squares distance between their descriptors. To avoid false matches (outliers), the fundamental matrix $F$ between images 1 and 2 is first computed (it links the pairs of points of interest by the relation $x_{i1}^{T} F x_{i2} = 0$). If, for a pair of points of interest $x_{i1}$ and $x_{i2}$ potentially corresponding in the least-squares sense, the product $x_{i1}^{T} F x_{i2}$ is greater than $10^{-5}$, this pair is rejected. The optical-trace search is then done by transition during the acquisition of a new image: when image $I_j$ is acquired, the optical-trace calculation is assumed to have been performed for all the previous images $I_1 \ldots I_{j-1}$; the points of interest of $I_j$ are computed and put in correspondence with those of $I_{j-1}$, and the optical traces are completed by transition, noting that if $x_{i,j}$ corresponds to $x_{i,j-1}$ and $x_{i,j-1}$ corresponds to $x_{i,j-2}$, then $x_{i,j}$ corresponds to $x_{i,j-2}$. A third scheme concerns strong gradients and correlation-based matching. The points of interest of an image are taken to be the set of points where the intensity variations are large. In practice, for each point of the image the standard deviation of the intensities in a neighborhood of 20 × 20 pixels around this point is computed; if this standard deviation is greater than a certain threshold (typically of the order of 10, for intensities coded on 8 bits), the point is considered a point of interest.
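Returning to the first scheme, the basic displacement step $d = C^{-1} b$ can be sketched as follows (a single pyramid level, assuming the tracked point lies away from the image border and the window contains enough texture for $C$ to be invertible, which the eigenvalue test above guarantees):

    import numpy as np

    def lk_displacement(I0, I1, pt, win=7):
        """One Lucas-Kanade-style step: around the tracked point, solve
        C d = b with C the structure tensor and
        b = sum_W grad(I0) * (I0 - I1), giving the displacement d = C^-1 b.
        For large motions this is repeated on an image pyramid, coarse to
        fine. I0, I1: 2D float arrays; pt: (row, col) of the point.
        """
        y, x = pt
        h = win // 2
        Iy, Ix = np.gradient(I0)
        sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
        ix, iy = Ix[sl].ravel(), Iy[sl].ravel()
        it = (I0[sl] - I1[sl]).ravel()           # temporal difference
        C = np.array([[ix @ ix, ix @ iy],
                      [ix @ iy, iy @ iy]])
        b = np.array([ix @ it, iy @ it])
        return np.linalg.solve(C, b)             # d = (dx, dy)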
For the third scheme, the search for correspondences between two images at the level of their points of interest is done by a correlation technique, for example (and this is not limiting) of the Médicis type (French patents filed on 29.03.2005, EP1756771 (B0453) and EP0600128 (B0471)). A real-time 3D reconstruction algorithm makes it possible to follow dynamically the movement of an instrument moving in the complementarized object. The 3D modeling follows three steps. In the first step, the 3D point cloud obtained by optical-trace processing is densified by computing an implicit interpolation function $f$. Thanks to this implicit function, the interpolating 3D surface is polygonized, for example (and this is not limiting) by a method of the Bloomenthal type. Finally, each polygon is textured in a very simple way: by projecting the 3D points delimiting the polygon onto the images having generated these points, a polygonal zone is delimited on these images; the texture on these polygonal areas is averaged and attributed to the polygon. The main difficulty lies in the algorithm used for the interpolation and the calculation of the implicit function. This algorithm is optimally adapted to our use because it allows real-time interpolation and, unlike other interpolation techniques, it allows dense interpolation from a very sparse initial cloud, which is very often the case when the work concerns objects with little texture, like the teeth. We explain below the generic interpolation at the base of this algorithm, then its use in practice in a multiscale scheme. Generic interpolation: let $P_i$ be the points of the 3D cloud (after estimating the normals $\vec{n}_i$ at these points); we seek an implicit function $f : \mathbb{R}^3 \to \mathbb{R}$, based on radial basis functions (RBF), such that the points $X$ belonging to the surface are those for which $f(X) = 0$. We choose $f$ such that

$$f(x) = \sum_{P_i \in P} \left[ g_i(x) + \lambda_i \right] \phi_\sigma(\lVert x - P_i \rVert), \quad \text{with } \phi_\sigma(x) = \phi(x / \sigma), \quad \phi(r) = (1 - r)_+^4 \, (4r + 1).$$

The unknowns to be determined in order to make $f$ explicit are therefore the $g_i$ and the $\lambda_i$. Estimation of $g_i$: consider the point $P_i$ and its normal, and choose a coordinate system $(u, v, w)$ such that $u$ and $v$ are perpendicular to the normal and $w$ points in the direction of the normal. Let $h$ be a function of the form $h(u, v) = Au^2 + Buv + Cv^2$; we look, for each $P_i$, for the coefficients $A$, $B$ and $C$ that minimize the quantity

$$\sum_{P_j} \phi_\sigma(\lVert P_i - P_j \rVert) \left( w_j - h(u_j, v_j) \right)^2 .$$

One then computes $g_i(x)$ as $g_i(x) = w - h(u, v)$.
[0037] Estimation of $\lambda_i$: knowing that $f(P_i) = 0$ for all $P_i$, the $\lambda_i$ can be estimated by solving a simple linear system. Multiscale interpolation: the generic interpolation is actually carried out on subsets of points to greatly improve the interpolation accuracy. We first construct a hierarchy of sets $\{P^0, \ldots, P^k\}$ in the following way: the set $P^0$ is a parallelepiped encompassing the set of points $P_i$; between two successive levels $k-1$ and $k$, each parallelepiped is subdivided into 8 smaller parallelepipeds. The function $f$ is computed by an iterative procedure: we start from $f^0 = -1$, then we iterate on the sets $P^k$ by updating $f$:

$$f^k(x) = f^{k-1}(x) + o^k(x), \quad o^k(x) = \sum_{P_i \in P^k} \left[ g_i^k(x) + \lambda_i^k \right] \phi_{\sigma_k}(\lVert x - P_i \rVert).$$

The $g_i^k$ are determined as previously described on the set $P^k$, and the $\lambda_i^k$ are computed by solving the system $f^{k-1}(P_i) + o^k(P_i) = 0$. The support radii are updated so that $\sigma_{k+1} = \sigma_k / 2$, and the number of levels to be built is defined by $M = -\log_2(\sigma_M / \sigma_0)$. The manipulation of such a system is extremely simple because its parameterization is fixed and cannot be modified by the operator, with the exception of certain pre-manipulation or other (pre-established) selections requiring clarification during the work. Among the former we find, for example, the patient's card (clinical record), while among the latter we find the instruments that he can use (deformable or not), the selection of the type of vision (for example with or without transparency) or the type of diagnosis sought (for example caries or tumors, which do not require the same type of lighting). This function can be controlled by a succession of automatic actions leading to the desired diagnosis. To do this, the operator (dentist, prosthetist or doctor) has a computer indicating the operations that can be performed with the augmented reality glasses and/or the cameras (whether or not enhanced by a precise reading with the endobuccal scanner), asking him to choose between one function and another. It should be noted that clinical actions are favored over the type of equipment: the drop-down menu or the speech command thus does not say "detection in fluorescent light" but simply "caries detection". All or part of the processing can be done at the level of cards included in a generic system (a standard laptop or desktop computer) or in a specific system including cards specially dedicated to the processing, transmission and visualization of the data. This set can be integrated into the unit or separate (for example in a cart). The first step is to collect the 2D, 2D 1/2 or 3D invisible images from the device 2 and store them, in deferred time (X-ray, IR, MRI...) or in near real time (ultrasound, OCT...). When the practitioner finds that the storage is done, he is ready to launch the construction of the 3D or pseudo-3D complementarized object. The second step is to take his augmented reality glasses 1 and start the reading by the cameras 9. The real image seen through the glasses is enriched successively: first with the 3D model of the visible part built with the two cameras 9 fixed on the glasses 1. Although this is possible, for safety reasons the direct vision is never eliminated in favor of the modeled representation in the field of vision; instead there is fusion between the direct view and this model built from the point cloud (see the Duret et al. patent BV 4).
Then the clinician calls up the view of the invisible part from the selected device 2 which, based on the common points, complements the modeled view of the two cameras 9 to create the complementarized object enclosing the visible part and the invisible part. If the clinician wishes to have a perfect, well-defined view of the visible part, he has the possibility of performing a complementary scan using an intraoral optical reading camera 4.
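Returning to the generic RBF interpolation described above, the following compact sketch fits the implicit function on a small cloud; for brevity it replaces the local quadric $h(u, v)$ by a local plane, which is a simplification and not the patent's exact scheme:

    import numpy as np

    def wendland(r):
        """Compactly supported kernel phi(r) = (1 - r)^4_+ (4r + 1)."""
        r = np.clip(r, 0.0, 1.0)
        return (1.0 - r) ** 4 * (4.0 * r + 1.0)

    def fit_implicit(points, normals, sigma):
        """Sketch of the generic interpolation step: f(x) =
        sum_i [g_i(x) + lambda_i] phi(|x - P_i| / sigma), with the
        lambda_i solved from f(P_j) = 0. Here g_i(x) = n_i . (x - P_i),
        i.e. a local plane instead of the quadric h(u, v)."""
        diff = points[:, None, :] - points[None, :, :]   # diff[j, i] = Pj - Pi
        D = np.linalg.norm(diff, axis=2)
        Phi = wendland(D / sigma)                        # (N, N) kernel matrix
        G = np.einsum("jid,id->ji", diff, normals)       # G[j, i] = g_i(P_j)
        # Solve Phi @ lam = -(Phi * G).sum(1) so that f(P_j) = 0 for all j;
        # assumes sigma is small enough for Phi to be well conditioned.
        lam = np.linalg.solve(Phi, -(Phi * G).sum(axis=1))
        def f(x):
            d = np.linalg.norm(x - points, axis=1)
            g = ((x - points) * normals).sum(axis=1)
            return float(((g + lam) * wendland(d / sigma)).sum())
        return f

    # Example on a sphere-like cloud (outward normals); the multiscale
    # scheme would repeat this with sigma halved, summing the corrections.
    rng = np.random.default_rng(1)
    pts = rng.standard_normal((60, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)
    f = fit_implicit(pts, pts.copy(), sigma=1.5)
    print(f(np.zeros(3)))   # negative inside the surface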
[0038] Optionally, and to facilitate the fusion of the two point clouds, those resulting from the reading performed on the invisible part 2 and those resulting from the reading of the visible part by the glasses 1 and 9 and the endobuccal scanner 4, the practitioner can indicate, using a specific instrument, the area common to both files (for example the crowns of the teeth), directly on the site or on a nearby screen. He may also use a calibration wedge to homogenize the two views in dimensional terms, thanks to the multiscale interpolation software. In fact, in some cases, especially when it comes to correlating one or more 2D views with the cloud of 3D points of the visible view, the correspondence is more difficult; based on the reference frame attached to the invisible 2D view, this wedge makes the work of the software easier. Advantageously, and according to the invention, LEDs can also play an important role in the correlation of successive views. Indeed, we know that there are methods that base the correlation of views on landmarks placed in the measured environment, on similarities found in the cloud itself, or even on the fuzzy edges of the views. All these systems are complex because they require either placing spherical landmarks in the area, a clinically complex operation, or identifying areas that often lack relief or have too regular a surface condition. Scanning with LEDs of known wavelengths, combined with color 3D imaging, simplifies and automates this procedure. Indeed, a simple colored line or a glued-on mark can be identified and visualized automatically if care has been taken to use a marking in a color that is complementary, identical, additive or subtractive with respect to the wavelength of one (or more) of the scanning LEDs. The identification is done by a simple enhancement of the chromatic reference, whatever it is. This landmark, which is always in the same position on the object regardless of the angle or the zoom of the optical impressions, serves as a correlation reference.
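By way of non-limiting illustration, the enhancement of the chromatic reference can be sketched as a hue threshold around the expected LED color; all thresholds below are assumptions:

    import numpy as np
    import colorsys

    def find_colored_mark(rgb_img, target_hue, hue_tol=0.04, min_sat=0.5):
        """Locate a colored registration mark chosen to match (or
        complement) a scanning LED's wavelength: keep pixels whose hue is
        close to the expected chromatic reference and return their centroid.

        rgb_img: (H, W, 3) floats in [0, 1]; target_hue: expected hue in [0, 1].
        """
        h, w, _ = rgb_img.shape
        hsv = np.array([colorsys.rgb_to_hsv(*px) for px in rgb_img.reshape(-1, 3)])
        d = np.abs(hsv[:, 0] - target_hue)
        hue_d = np.minimum(d, 1.0 - d)                 # hue is circular
        mask = (hue_d < hue_tol) & (hsv[:, 1] > min_sat)
        if not mask.any():
            return None
        idx = np.argwhere(mask.reshape(h, w))
        return idx.mean(axis=0)                        # (row, col) of the mark

    centroid = find_colored_mark(np.random.rand(60, 80, 3), target_hue=0.6)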
[0039] The fusion function creating the complementarized object can be launched using a button located on the glasses or near the clinical seat, by verbal command via the microphone 11, or by a pedal in communication with the computer; the practitioner can stop it, by releasing the pressure or pressing a second time, when he judges that the result is correct or that he has finished his clinical act. The cameras on the glasses permanently interpolate the invisible file onto the visible file, providing the clinician with a complete view of the supra- and sub-gingival portions, directly in the mouth and in real time or near real time. It should be noted that this visible/invisible fusion operation can also be done on a plaster model in a laboratory, the prosthetist being able to wear augmented reality glasses. This gives the prosthetist valuable information when he has to prepare sub-gingival prostheses, removable devices requiring knowledge of the thicknesses of the gingival tissues and of the bone relief, surgical guides, or prefabricated implants in implantology, by seeing through the plaster the underlying organs (arteries, subgingival finishing line...). Without this invention the operation would be impossible. The software processing makes it possible to compute in real time the 3D coordinates (x, y, z) and the color of each of the measured points; a 3D file of a partial or complete arch is obtained, in color, associated with the information of the invisible part. The cameras located on the glasses 1, filming the zone to be visualized, allow a complete survey of the information necessary for the numerical processing of all or part of the object to be seen, but also measured, on the vestibular, lingual and proximal faces, if the practitioner wishes to use the stereoscopic measurement function offered by a camera creating point clouds (see the Duret et al. patent FR 14.54774). These zones are automatically merged by the software onto the previous imprecise views by using the same common reference system (for example the tooth crowns). This same detection of the common zones can be done at the level of the modeling curves (NURBS, radial basis functions, wavelets...). If the practitioner decides to use the diagnostic function, he selects on the computer, or verbally, the type of diagnosis desired, for example melanoma or caries detection; the camera then launches a scan at the wavelength suited to highlighting, on a 3D image, the areas of interest for the preselected wavelengths. In addition, thanks to the 3D analysis of the object, repeating the measurements over time makes it possible to better follow the evolution of the pathology in question. It is indeed accepted by professionals that the study of a suspicious image can be made in 2D, but it is above all the evolution of its volume and its color that serves as a reference for monitoring its dangerousness over time. The fact of having a volume referred to a mathematical center (for example the center of gravity) makes it possible to superimpose the images on a center dependent on the object and not on the observer, in order to appreciate objectively the evolution of the volume, the analysis of the color being referred to a 3D form, which is not the case with the methods practiced today on 2D surfaces or those using structured light or waves (OCT, scanner or MRI).
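By way of illustration, referring successive acquisitions of a lesion to its own center of gravity, and comparing volumes, can be sketched as follows (a rough bounding-volume metric; a clinical tool would use meshed volumes):

    import numpy as np

    def centered(cloud):
        """Express a lesion's point cloud relative to its own center of
        gravity, so successive acquisitions are superimposed on an
        object-dependent origin rather than on the observer."""
        return cloud - cloud.mean(axis=0)

    def volume_change(cloud_t0, cloud_t1):
        """Follow-up metric: compare bounding volumes of the centered
        clouds at two visits and return the relative growth."""
        v0 = np.prod(np.ptp(centered(cloud_t0), axis=0))
        v1 = np.prod(np.ptp(centered(cloud_t1), axis=0))
        return (v1 - v0) / v0

    growth = volume_change(np.random.rand(200, 3), 1.05 * np.random.rand(200, 3))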
Likewise, by selecting certain wavelengths emitted by the LEDs present around the reading window, and by increasing their frequencies and/or their intensities, it is possible to show on a 3D image the visualization of certain anatomies or pathologies located at depth, in addition to those visualized by the device showing the invisible part. The knowledge of the volume gives an indication of the position of the pathological limit, making it possible to predict and visualize its evolution. This is the case for the fluorescence reactions of certain tissues under blue or UV radiation: the fluorescence appears not only on the surface but in the depth of the pathology, providing assistance for the therapy to be applied (removal of the pathological tissues). Knowing the penetration of a given radiation, its importance and depth can be appreciated in relation to the actual 3D surface analyzed. It follows from the foregoing description that the present invention responds perfectly to the problems posed, in the sense that it provides a real answer to the visualization of the visible and invisible areas and, more particularly, to their meeting in the same reference frame, allowing the visualization of the complementarized object directly in the mouth, on the clinical site. It allows an immediate anatomical and pathological analysis of the pathologies of the gingiva and the underlying tissues. It also appears from this description that it makes it possible to solve fundamental problems, such as the control of the clinical act, especially as no alternative method has been proposed. It goes without saying that the invention is not limited to the sole embodiment of this method, nor to the embodiments of the device for the implementation of this method, described above by way of example; on the contrary, it embraces all the variants of implementation and realization. Thus, in particular, it is possible to measure oral pathologies that concern hard tissues as well as soft tissues. As will be understood, what is proposed is a universal device for the visualization and measurement of the visible and invisible parts during the clinical act, taken in its field of application, answering many demands in terms of cost, user-friendliness, and help with measurement or diagnostic imaging in dentistry. This system can, for example, be applied, in an evolved form, to any 3D acquisition requiring a fast and precise manipulation obliging the operator not to take his eyes off his field of work, analysis and/or measurement. This is the case for work performed on all parts of the human body, for data acquisitions that must not be disturbed by sudden movements of the patient, for rapid actions such as sports gestures, or for industrial production procedures, especially in hostile environments. It is thus possible to follow and inform the operator in real time or near real time, while allowing him not to take his eyes off the scene and displaying additional information for him.
The invention can also be used to perform medical or nursing procedures: the display device may aid in locating the anatomical elements for subcutaneous injection, intravenous injection or catheterization; it may also help in the study of the skin and gingival thicknesses between the bone and the surface of the skin or gum.
Claims (27)
[0001]
1. Device for visualizing the inside of a patient's mouth (13), the display device comprising a penetrating ray emitter (2) adapted to take a view of an inner part (14, 22, 23, 27, 28, 31, 32, 33, 34, 35, 36, 37, 50, 51, 62, 63) located under an external surface of a member disposed in the mouth, characterized in that it comprises a pair of augmented reality glasses (1) having, on the one hand, an optical glass through which a user (6) of the pair of glasses (1) can see the inside of the mouth and, on the other hand, a viewing camera (9) adapted to take an image of what the user (6) sees through the optical glass, a central unit (3) being adapted to correlate first images, corresponding to those taken by the viewing camera (9), with second images, corresponding to those taken by the penetrating ray emitter (2).
[0002]
2. Display device according to claim 1, characterized in that the central unit (3) is adapted to orient the second images according to the orientation of the pair of augmented reality glasses (1).
[0003]
3. Display device according to one of claims 1 and 2, characterized in that the central unit (3) is adapted to project on the optical glass the correlation of the first images with the second images.
[0004]
4. Display device according to claim 3, characterized in that the central unit (3) is adapted to project on the optical glass, at the command of the user (6), images of a selection of anatomical components of the organ taken by the penetrating ray emitter (2).
[0005]
5. Display device according to one of claims 1 to 4, characterized in that it comprises an optical impression camera (4) adapted to take an optical impression of an outer surface of a member disposed in the mouth, the central unit (3) being adapted to correlate third images, corresponding to those taken by the optical impression camera (4), with the first images.
[0006]
6. Display device according to one of claims 1 to 5, characterized in that the correlation of the images made by the central unit (3) is a superposition and/or a replacement of the images on the optical glass.
[0007]
7. Display device according to one of claims 1 to 6, characterized in that the central unit (3) is adapted, at the command of the user (6), to change the contrast and the transparency of the images that it processes.
[0008]
8. Display device according to one of claims 1 to 7, characterized in that the penetrating ray emitter (2) is adapted to transmit digitally to the central unit (3) the images it takes.
[0009]
9. Display device according to one of claims 1 to 7, characterized in that it comprises a scanning device adapted to digitize non-digital images transmitted by the penetrating ray emitter (2) and to transmit the digitized images to the central unit (3).
[0010]
10. Display device according to one of claims 1 to 9, characterized in that the central unit (3) is adapted to project on the optical glass additional information relating to the patient (13).
[0011]
11. Display device according to claim 10, characterized in that the additional information relating to the patient (13) comprises data to be respected for producing a dental prosthesis.
[0012]
12. Display device according to one of claims 10 or 11, characterized in that it comprises at least one peripheral instrument connected to the central unit (3) and adapted to capture additional information relating to the patient (13).
[0013]
13. Display device according to claim 12, characterized in that one of the peripheral instruments makes it possible either to capture the static occlusion and the mandibular movements, or to capture the color of the teeth, or to capture the shape of the face, or to capture the physiological data of the patient (13).
[0014]
14. Display device according to one of claims 1 to 13, characterized in that it comprises a microphone (11) adapted to pick up control commands from the user (6) and transmit them to the unit central (3).
[0015]
15. Display device according to one of claims 1 to 14, characterized in that the pair of augmented reality glasses (1) comprises a spatial tracking instrument.
[0016]
16. Display device according to one of claims 1 to 15, characterized in that it comprises a lighting system (5) adapted to illuminate the organ disposed in the mouth.
[0017]
17. Display device according to claim 16, characterized in that the lighting system (5) comprises light-emitting diodes whose wavelength is adapted to allow the identification of pathologies.
[0018]
18. Display device according to one of claims 1 to 17, characterized in that the central unit (3) is adapted to project on a remote screen (7) images relating to the organ disposed in the mouth.
[0019]
19. Medical monitoring assembly, characterized in that it comprises a display device according to one of claims 1 to 18 and a medical treatment instrument (46, 47, 48, 49, 54, 58, 59) which comprises, on the one hand, a tool (46, 49, 58, 59) adapted to treat anatomical components of an organ with which it is in contact and, on the other hand, a marker (47) adapted to be located spatially by the display device during the treatment of the anatomical components, the central unit (3) being adapted to know the dimensions of the tool (46, 49, 58, 59) and the distance separating the tool (46, 49, 58, 59) from the marker (47), and to determine the position of the tool (46, 49, 58, 59) in the organ during treatment.
[0020]
20. Medical monitoring assembly according to claim 19, characterized in that the central unit (3) is adapted to produce fourth images which represent the tool (46, 49, 58, 59) used for the treatment, to correlate them with the second images, and to project the correlation so as to allow the display of the tool (46, 49, 58, 59) in the organ being treated.
[0021]
21. Medical monitoring assembly according to one of claims 19 and 20, characterized in that, the length of the movement of the tool (46, 49, 58, 59) being equal to the length of the displacement of the marker (47), the central unit (3) is adapted to determine the direction and sense of movement of the tool (46, 49, 58, 59) relative to the anatomical components with which it is in contact, the direction and sense of the movement of the tool (46, 49, 58, 59) being equal to the direction and sense of movement of the marker (47) if the tool (46, 49, 58, 59) is dimensionally stable with respect to these anatomical components, or determined by the relief of these anatomical components if the tool (46, 49, 58, 59) is deformable relative thereto.
[0022]
22. Medical monitoring assembly according to one of claims 19 to 21, characterized in that the central unit (3) is adapted to determine the ideal movement (55) of the tool (46, 49, 58, 59) used for carrying out a treatment.
[0023]
23. Medical monitoring assembly according to claim 22, characterized in that the central unit (3) is adapted to guide the user (6) so that the tool (46, 49, 58, 59) used follows the ideal movement (55).
[0024]
24. Medical monitoring assembly according to claim 23, characterized in that the guidance of the user (6) is achieved by displaying on the optical glass the ideal movement (55) correlated with the second images.
[0025]
25. Medical monitoring assembly according to one of claims 23 and 24, characterized in that the guidance of the user (6) is achieved by the emission of a sound signal depending on the position of the tool (46, 49, 58, 59) used.
[0026]
26. Medical monitoring assembly according to one of claims 19 to 25, characterized in that the tool (46, 49, 58, 59) used is identified by an identifier and in that the central unit (3) is adapted to receive the identifier and to determine the corresponding tool (46, 49, 58, 59).
[0027]
27. Medical monitoring assembly according to claim 26, characterized in that the central unit (3) comprises a library of identifiers, each identifier corresponding to a tool (46, 49, 58, 59) forming part of the display device.
Similar technologies:
Publication number | Publication date | Patent title
FR3032282A1|2016-08-05|DEVICE FOR VISUALIZING THE INTERIOR OF A MOUTH
EP3148402B1|2021-01-13|Device for viewing the inside of the mouth of a patient
JP6223331B2|2017-11-01|Three-dimensional measuring device used in the dental field
US20210038324A1|2021-02-11|Guided surgery apparatus and method
KR102351703B1|2022-01-17|Identification of areas of interest during intraoral scans
US7912257B2|2011-03-22|Real time display of acquired 3D dental data
EP2677938B1|2019-09-18|Space carving in 3d data acquisition
US20200359777A1|2020-11-19|Tracked toothbrush and toothbrush tracking system
JP5043145B2|2012-10-10|Diagnostic system
Zhou et al.2019|Towards AR‐assisted visualisation and guidance for imaging of dental decay
US20210386512A1|2021-12-16|Method Tool And Method Of Operating A Machine Tool
BR112020023692A2|2021-02-17|method of analyzing a patient's real dental situation and using this
Patent family:
Publication number | Publication date
CN107529968B|2020-07-28|
FR3032282B1|2018-09-14|
US9877642B2|2018-01-30|
EP3253279A1|2017-12-13|
CN107529968A|2018-01-02|
WO2016124847A1|2016-08-11|
US20160220105A1|2016-08-04|
Cited documents:
Publication number | Application date | Publication date | Applicant | Patent title
US6006126A|1991-01-28|1999-12-21|Cosman; Eric R.|System and method for stereotactic registration of image scan data|
US20040119662A1|2002-12-19|2004-06-24|Accenture Global Services Gmbh|Arbitrary object tracking in augmented reality applications|
US20090124890A1|2005-02-18|2009-05-14|Raymond Derycke|Method and a System for Assisting Guidance of a Tool for Medical Use|
US20120056993A1|2010-09-08|2012-03-08|Salman Luqman|Dental Field Visualization System with Improved Ergonomics|
EP3636159A1|2018-10-09|2020-04-15|Ivoclar Vivadent AG|Dental tool control system|
CN112469359A|2018-05-22|2021-03-09|牙科监测公司|Method for analyzing dental conditions|
WO2021245274A1|2020-06-06|2021-12-09|Querbes Olivier|Taking an optical impression of a patient's dental arch|
GB1086094A|1964-11-24|1967-10-04|United Steel Companies Ltd|Methods of and apparatus for use in the continuous casting of steel|
FR2536654B1|1982-11-30|1987-01-09|Duret Francois|METHOD FOR PRODUCING A DENTAL PROSTHESIS|
FR2610821B1|1987-02-13|1989-06-09|Hennson Int|METHOD FOR TAKING MEDICAL IMPRESSION AND DEVICE FOR IMPLEMENTING SAME|
EP0600128A1|1992-11-30|1994-06-08|En-Tech Research Institute Inc.|An immobilization agent for industrial waste|
US6847336B1|1996-10-02|2005-01-25|Jerome H. Lemelson|Selectively controllable heads-up display system|
IL125659A|1998-08-05|2002-09-12|Cadent Ltd|Method and apparatus for imaging three-dimensional structure|
FR2868168B1|2004-03-26|2006-09-15|Cnes Epic|FINE MATCHING OF STEREOSCOPIC IMAGES AND DEDICATED INSTRUMENT WITH A LOW STEREOSCOPIC COEFFICIENT|
WO2005104926A1|2004-04-30|2005-11-10|J. Morita Manufacturing Corporation|Living body observing apparatus, intraoral imaging system, and medical treatment appliance|
US20130242262A1|2005-10-07|2013-09-19|Percept Technologies Inc.|Enhanced optical and perceptual digital eyewear|
US7372642B2|2006-02-13|2008-05-13|3M Innovative Properties Company|Three-channel camera systems with non-collinear apertures|
US8314815B2|2006-04-12|2012-11-20|Nassir Navab|Virtual penetrating mirror device for visualizing of virtual objects within an augmented reality environment|
JP5390377B2|2008-03-21|2014-01-15|淳 高橋|3D digital magnifier surgery support system|
FR2960962B1|2010-06-08|2014-05-09|Francois Duret|DEVICE FOR THREE DIMENSIONAL AND TEMPORAL MEASUREMENTS BY COLOR OPTICAL FOOTPRINT.|
WO2012075155A2|2010-12-02|2012-06-07|Ultradent Products, Inc.|System and method of viewing and tracking stereoscopic video images|
US9135498B2|2012-12-14|2015-09-15|Ormco Corporation|Integration of intra-oral imagery and volumetric imagery|
JP2015002791A|2013-06-19|2015-01-08|ソニー株式会社|Wireless communication system, wireless terminal apparatus, and storage medium|
DE102014206004A1|2014-03-31|2015-10-01|Siemens Aktiengesellschaft|Triangulation-based depth and surface visualization|
AU2015342962A1|2014-11-06|2017-06-29|Shane MATT|Three dimensional imaging of the motion of teeth and jaws|
US10013808B2|2015-02-03|2018-07-03|Globus Medical, Inc.|Surgeon head-mounted display apparatuses|
FR3010629B1|2013-09-19|2018-02-16|Dental Monitoring|METHOD FOR CONTROLLING THE POSITIONING OF TEETH|
US10449016B2|2014-09-19|2019-10-22|Align Technology, Inc.|Arch adjustment appliance|
FR3027507B1|2014-10-27|2016-12-23|H 42|METHOD FOR CONTROLLING THE DENTITION|
US10154239B2|2014-12-30|2018-12-11|Onpoint Medical, Inc.|Image-guided surgery with surface reconstruction and augmented reality visualization|
US10504386B2|2015-01-27|2019-12-10|Align Technology, Inc.|Training method and system for oral-cavity-imaging-and-modeling equipment|
US20160242623A1|2015-02-20|2016-08-25|Cefla Societá Cooperativa|Apparatus and method for visualizing data and images and for controlling a medical device through a wearable electronic device|
US10222619B2|2015-07-12|2019-03-05|Steven Sounyoung Yu|Head-worn image display apparatus for stereoscopic microsurgery|
US11103330B2|2015-12-09|2021-08-31|Align Technology, Inc.|Dental attachment placement structure|
US9861446B2|2016-03-12|2018-01-09|Philipp K. Lang|Devices and methods for surgery|
WO2017218951A1|2016-06-17|2017-12-21|Align Technology, Inc.|Orthodontic appliance performance monitor|
EP3471599A4|2016-06-17|2020-01-08|Align Technology, Inc.|Intraoral appliances with sensing|
EP3578131B1|2016-07-27|2020-12-09|Align Technology, Inc.|Intraoral scanner with dental diagnostics capabilities|
US10507087B2|2016-07-27|2019-12-17|Align Technology, Inc.|Methods and apparatuses for forming a three-dimensional volumetric model of a subject's teeth|
CN113648088A|2016-11-04|2021-11-16|阿莱恩技术有限公司|Method and apparatus for dental images|
DE102016121687A1|2016-11-11|2018-05-17|Ident-Ce Gbr|Intraoral scanners and methods for digital dental impressioning in the dental field|
US11026831B2|2016-12-02|2021-06-08|Align Technology, Inc.|Dental appliance features for speech enhancement|
CN110062609B|2016-12-02|2021-07-06|阿莱恩技术有限公司|Method and apparatus for customizing a rapid palate expander using a digital model|
US10888399B2|2016-12-16|2021-01-12|Align Technology, Inc.|Augmented reality enhancements for dental practitioners|
US10548700B2|2016-12-16|2020-02-04|Align Technology, Inc.|Dental appliance etch template|
US10467815B2|2016-12-16|2019-11-05|Align Technology, Inc.|Augmented reality planning and viewing of dental treatment outcomes|
EP3568070A4|2017-01-16|2020-11-11|Philipp K. Lang|Optical guidance for surgical, medical, and dental procedures|
US10779718B2|2017-02-13|2020-09-22|Align Technology, Inc.|Cheek retractor and mobile device holder|
US10546427B2|2017-02-15|2020-01-28|Faro Technologies, Inc|System and method of generating virtual reality data from a three-dimensional point cloud|
US10613515B2|2017-03-31|2020-04-07|Align Technology, Inc.|Orthodontic appliances including at least partially un-erupted teeth and method of forming them|
CN107157588B|2017-05-08|2021-05-18|上海联影医疗科技股份有限公司|Data processing method of image equipment and image equipment|
US20180338119A1|2017-05-18|2018-11-22|Visual Mobility Inc.|System and method for remote secure live video streaming|
US10610179B2|2017-06-05|2020-04-07|Biosense WebsterLtd.|Augmented reality goggles having X-ray protection|
US11045283B2|2017-06-09|2021-06-29|Align Technology, Inc.|Palatal expander with skeletal anchorage devices|
US10639134B2|2017-06-26|2020-05-05|Align Technology, Inc.|Biosensor performance indicator for intraoral appliances|
US10885521B2|2017-07-17|2021-01-05|Align Technology, Inc.|Method and apparatuses for interactive ordering of dental aligners|
US11116605B2|2017-08-15|2021-09-14|Align Technology, Inc.|Buccal corridor assessment and computation|
WO2019036677A1|2017-08-17|2019-02-21|Align Technology, Inc.|Dental appliance compliance monitoring|
CN107789072B|2017-09-25|2020-06-16|北京缙铖医疗科技有限公司|Intracranial lesion body surface holographic projection positioning system based on head-mounted augmented reality equipment|
US10813720B2|2017-10-05|2020-10-27|Align Technology, Inc.|Interproximal reduction templates|
EP3476357A1|2017-10-24|2019-05-01|GuideMia BiotechnologiesLtd.|An operational system on a workpiece and method thereof|
US11096763B2|2017-11-01|2021-08-24|Align Technology, Inc.|Automatic treatment planning|
CN111417357A|2017-11-30|2020-07-14|阿莱恩技术有限公司|Sensor for monitoring oral appliance|
US11058497B2|2017-12-26|2021-07-13|Biosense WebsterLtd.|Use of augmented reality to assist navigation during medical procedures|
US10980613B2|2017-12-29|2021-04-20|Align Technology, Inc.|Augmented reality enhancements for dental practitioners|
US11013581B2|2018-01-26|2021-05-25|Align Technology, Inc.|Diagnostic intraoral methods and apparatuses|
CN108389488B|2018-03-05|2020-12-15|泉州医学高等专科学校|Interactive oral cavity simulation system|
DE102018204098A1|2018-03-16|2019-09-19|Sirona Dental Systems Gmbh|Image output method during a dental application and image output device|
US10835352B2|2018-03-19|2020-11-17|3D Imaging and Simulation Corp. Americas|Intraoral scanner and computing system for capturing images and generating three-dimensional models|
EP3813713A4|2018-05-10|2022-01-26|Live Vue Tech Inc|System and method for assisting a user in a surgical procedure|
CN112584791A|2018-06-19|2021-03-30|托尼尔公司|Neural network for diagnosing shoulder disorders|
CN109259881B|2018-09-22|2021-08-27|上海复敏贝浙健康管理有限公司|Non-handheld intelligent tooth cleaner and use method thereof|
EP3873379A1|2018-11-01|2021-09-08|3Shape A/S|System for measuring periodontal pocket depth|
EP3649919A1|2018-11-12|2020-05-13|Ivoclar Vivadent AG|Dental imaging system|
IT201800011117A1|2018-12-14|2020-06-14|Marco Farronato|SYSTEM AND METHOD FOR THE VISUALIZATION OF AN ANATOMICAL SITE IN AUGMENTED REALITY|
RU2693993C1|2019-01-28|2019-07-08|федеральное государственное автономное образовательное учреждение высшего образования "Российский университет дружбы народов" |Method for computer simulation of recovery of biomechanical dental parameters for uniform distribution of masticatory load on supporting dental tissues and bone tissue|
CN110459083B|2019-08-22|2020-08-04|北京众绘虚拟现实技术研究院有限公司|Vision-touch fused augmented reality oral surgery skill training simulator|
JP2021109033A|2020-01-15|2021-08-02|株式会社モリタ製作所|Cap, imaging device, data generation system, and data generation method|
RU2738706C1|2020-07-14|2020-12-15|Федеральное государственное бюджетное образовательное учреждение высшего образования "Пермский государственный медицинский университет имени академика Е.А. Вагнера" Министерства здравоохранения Российской Федерации|Method for making a partial removable plate denture|
Legal status:
2016-02-24| PLFP| Fee payment|Year of fee payment: 2 |
2016-08-05| PLSC| Publication of the preliminary search report|Effective date: 20160805 |
2017-02-22| PLFP| Fee payment|Year of fee payment: 3 |
2018-02-26| PLFP| Fee payment|Year of fee payment: 4 |
2020-02-18| PLFP| Fee payment|Year of fee payment: 6 |
2021-02-17| PLFP| Fee payment|Year of fee payment: 7 |
2022-02-22| PLFP| Fee payment|Year of fee payment: 8 |
Priority:
Application number | Application date | Patent title
FR1550817A|FR3032282B1|2015-02-03|2015-02-03|DEVICE FOR VISUALIZING THE INTERIOR OF A MOUTH|
FR1550817|2015-02-03
FR1550817A| FR3032282B1|2015-02-03|2015-02-03|DEVICE FOR VISUALIZING THE INTERIOR OF A MOUTH|
US15/007,010| US9877642B2|2015-02-03|2016-01-26|Device for viewing an interior of a mouth|
EP16707857.5A| EP3253279A1|2015-02-03|2016-02-01|Device for viewing the inside of a mouth|
PCT/FR2016/050206| WO2016124847A1|2015-02-03|2016-02-01|Device for viewing the inside of a mouth|
CN201680020959.9A| CN107529968B|2015-02-03|2016-02-01|Device for observing inside of oral cavity|